Hello and welcome to another OpenShift Commons briefing. Today we have Clayton Coleman with us. He's one of the lead architects for OpenShift and works as a core contributor to Kubernetes. He's going to give us an overview of OpenShift Origin today and give us some insights into the current release, 1.3, that's going out the door now, as well as talk a little bit about the future and the roadmap for the road ahead for OpenShift Origin and Kubernetes. So take it away, Clayton.

Thank you, Diane. So my name is Clayton Coleman. As you heard, I'm an architect for OpenShift, and I'm also heavily involved in helping make Kubernetes a success, both as a community member and as a Red Hatter. OpenShift is ultimately a platform for application developers. Our goal is to build and support the kinds of workflows and development tooling that developers need to start with an idea and take that idea all the way to production, including not just how the application is run, but how you manage change in the application, and that includes things like continuous integration and continuous deployment. It also means packaging up your source code into Docker images for running on the platform, as well as the ability to not just run your application on a production server somewhere off in the cloud, but to also be able to run that application the same way on your local laptop. So it's very important to us to enable that end-to-end story as a developer, where you can recreate the same application environment for any kind of application no matter where you need that application to run. A question I get asked a lot is: how is OpenShift different than Kubernetes? I think a lot of people today know what Kubernetes is. It's a container orchestration engine.
Kubernetes is a great place to run applications, and with OpenShift our goal was to take the next step: not just to have a place that's great to run applications in as a developer, but a place that supports running many, many applications, as well as the teams and operational necessities so that your applications keep running. And so we provide today a secure, multi-tenant distribution of Kubernetes. We've added a number of tools that I'll talk about in just a few minutes for supporting the entire software development lifecycle, as well as helping to make easy integrations with all of the source-code-to-production flows that matter. We want to make it easy to spin up and deploy applications, whether those are applications that you've designed, that someone else in the community has designed, or something that has been provided for you by your administrator in the largest settings. And we think ultimately that applications aren't just web applications. There are all other kinds of applications: there are databases, and there are applications that have grown over the years to be very complex. And at the end of the day, if we can't provide a solution for all kinds of developers, then we don't think Kubernetes is a very interesting place to run applications. Ultimately, we want to bundle it all together in a friendly, well-documented, easy-to-get-running setup. So with that being said, OpenShift has a number of core goals. Those core goals are really to be the best place to run all applications, and we think that Kubernetes is the best place to do that.
On top of and around that, we want to build well-integrated tools to make developers happy and productive, and to integrate the tools and workflows that people are familiar with out there in the real world: things like GitHub and GitLab, Jenkins, additional types of web frameworks like the Netflix open source stack. All of the different ways that we develop web and internet applications should really, ultimately, be easy to consume from one place. We want those applications to be portable, because if the application isn't portable, then not only are you locked into a particular choice, but it's very hard for you to deploy that application repeatedly and rapidly. And so the consequence of portability is ease of deployment. And we want to empower developers in a way that's been very difficult before. Most people are used to thinking of having the resources of their computer available to them. They might have a laptop and a desktop, and some people have access to maybe a couple of servers. But in the future, there's a lot more out there than that. We have cloud environments, and those cloud environments might offer tens or hundreds of machines that for the most part are not doing anything useful. We want to build abstractions that let developers use those: for instance, to run builds on, to run background tasks, or to split their application up into much smaller pieces that can consume resources as they need it. And so these four goals together really help inform what we do in each release, and with each release we'd like to add more goals and integrations. But this is the heart of OpenShift. And for those who aren't familiar with OpenShift: OpenShift is built on top of Kubernetes, and we make it easy to run Docker containers on the machines in a cluster.
We bring in persistent storage so you can have stateful applications, things like databases, but even just normal web applications that might need access to a file system that lives longer than a container. And that really brings the best of both worlds: you can build applications that need to deal with state, and you can also benefit from the flexibility and resiliency of a cluster. We offer a service layer that helps bridge the gap between applications running both on and off the cluster. We make it easy to expose your application to end consumers, and we integrate source code, CI/CD, and existing automation into a flow that works between both developers and operators. So with that out of the way, what have we done with OpenShift 1.3? This is the third release of the newest iteration of the OpenShift architecture. We launched OpenShift 1.0 just over a year ago now, and in the last year we've made some huge fundamental improvements, both to Kubernetes as well as to OpenShift on top. The OpenShift 1.3 release took place on September 15th. It's up on GitHub for people to download, and we've started building and releasing images for people to try. There's a huge number of features that we delivered in OpenShift 1.3. The biggest would be that all of the hard work that folks at Red Hat and Google and other companies have put into improving Kubernetes has been released as part of OpenShift. We ship and support a version of Docker that's very well optimized to work with Kubernetes, that's been tested at scale, and that has a number of important fixes. The combination of these two really helps administrators bridge the gap between a rapidly changing Docker upstream and a stable production environment for applications. One of our signature features is an integration with CI/CD through Jenkins; I'll talk about that in a few minutes.
We've really improved the ability of applications to consume configuration that is shared between many different components. The web console has gotten some huge upgrades. And as you go down this list, a lot of the things that we focused on in this 1.3 release are about continuing to refine the platform, bringing all of the integration points together for dealing with Docker images in a secure way, and ensuring that you can build real-world applications and deploy them. The biggest signature feature, and this is still in an alpha state in OpenShift 1.3 (in OpenShift 1.4 we hope to bring it into beta or final availability), is the integration of the Jenkins pipeline feature. Jenkins 2.0 launched pipelines, which are a way for a developer, in a single file, to describe a continuous integration and deployment pipeline. We've added a deep integration with that into the OpenShift console. As you can see from the screenshot, when you're running a very complex build pipeline, one that has multiple stages, and those stages might run in parallel or deploy to different environments, OpenShift can surface that to you and give you access to all the information about how those deployments are progressing. In addition, what we don't show here are the integrations that are possible from Jenkins: Jenkins pipelines allow you to easily use OpenShift builds and deployments, and all of the concepts that exist in Kubernetes, to actually deploy and manage an app from within Jenkins. We've really tried to bring together the idea of an environment to run applications and the process and pipeline tools in Jenkins that allow you to set up a repeatable development process, one that helps you automate that end-to-end lifecycle all the way from a pull request to an application being rolled out in a deployment environment. The integration into Jenkins is an image that we ship and deliver as part of the OpenShift open-source project.
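As a rough sketch of how this is declared, a pipeline in OpenShift 1.3 is modeled as a BuildConfig that uses the JenkinsPipeline build strategy, with the Jenkinsfile either inline or read from the source repository. The names here (frontend-pipeline, frontend) are hypothetical, and the openshiftBuild and openshiftDeploy steps come from the OpenShift Jenkins plugin:

```yaml
apiVersion: v1
kind: BuildConfig
metadata:
  name: frontend-pipeline        # hypothetical pipeline name
spec:
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      # An inline Jenkinsfile; it can also be checked into the source repo instead.
      jenkinsfile: |-
        node {
          stage 'build'
          openshiftBuild(buildConfig: 'frontend', showBuildLogs: 'true')
          stage 'deploy'
          openshiftDeploy(deploymentConfig: 'frontend')
        }
```

Creating an object like this in a project is what can trigger OpenShift to provision the managed Jenkins server if one isn't already running there.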
That image is Jenkins with a special plugin that integrates it very deeply into OpenShift. So as a developer in OpenShift, I can start off on the command line and fire up a whole deployment pipeline in a single command. If I start a project that has a Jenkinsfile checked into it, OpenShift can automatically detect that Jenkinsfile and provision a personal Jenkins server for you in the OpenShift environment, which really means that for a developer it requires no additional work to start and run Jenkins. It's something that the platform is managing for you. And so we think that this integration, and others like it in the future, really helps bridge the gap between your production environment and how things are running, and, on the development side, how you get to the point where you can take advantage of all the power of the cluster. We've done some major improvements to the OpenShift user interface. The biggest change that you might notice as a developer is that we've tried to start presenting an overview of the process that your application goes through to be deployed. So at the top of the screen, you can see my pipeline build is actually in progress, and I'll get a readout of the status and what stages it's passing through. When that application deploys, you'll actually see the rest of the UI react. And so in this case, I'm running Jenkins in my cluster, obviously, but I have a front-end application that's going to be deployed by the Jenkins pipeline. And then I've got a database, and we've added the metrics information that's surfaced up through OpenShift, which is both a part of Kubernetes as well as some of the additional features that have been added in by OpenShift to really make metrics easy to consume as a developer. And so here you can see that my front-end application, a Ruby sample application, is currently using 80 megabytes of memory. And at this point, it's not receiving a ton of traffic.
So you can see it's basically idle, but it's running with two pods. And the big theme has been to surface information that was already available and present it to developers in a way that allows them to understand the relationships between those applications. In addition, not shown here are a number of other improvements to the structure of the console. In the left bar, you can see the different ways that you can interact with your applications broken down into a number of subsections. And the add-to-project page has been significantly revamped to allow you both to create applications from Docker images directly, so you can spin up a whole new application just from a Docker image that's located on Docker Hub or a private Docker registry, as well as to consume other images that you've already created. Another really big feature that was part of OpenShift 1.3, and it gets to what we talked about with being able to run all kinds of applications, is the new alpha feature in Kubernetes 1.3 known as PetSets. I've been involved in the design and iteration of PetSets for a long time. What we really want to bring forward with PetSets is to make it easy for applications that need some connection to persistent storage, and that also need a persistent identity, to bring those together in a way where the individual units of software running on the cluster may come and go, but the identity of the entire group of those units stays in place. As an example, I might be running a ZooKeeper cluster that has five members. In ZooKeeper, each of the members of the cluster needs to know that it's ZooKeeper 1, ZooKeeper 2, ZooKeeper 3, etc., and each of those needs its own persistent volume for storage. The PetSet makes it easy for you to define the template for what each of those five members is going to look like.
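A minimal sketch of what that five-member ZooKeeper template might look like as a manifest, using the alpha PetSet API from Kubernetes 1.3 (the names, image, and sizes here are illustrative, not from the talk):

```yaml
apiVersion: apps/v1alpha1        # PetSets are alpha in Kubernetes 1.3
kind: PetSet
metadata:
  name: zk
spec:
  serviceName: zk                # headless service giving each pet a stable name (zk-0, zk-1, ...)
  replicas: 5
  template:
    metadata:
      labels:
        app: zk
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"   # required by the alpha implementation
    spec:
      containers:
      - name: zookeeper
        image: zookeeper:3.4     # hypothetical image reference
        volumeMounts:
        - name: data
          mountPath: /var/lib/zookeeper
  volumeClaimTemplates:          # each member gets its own persistent volume claim
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```

Each member comes up with a stable identity (zk-0 through zk-4), and its volume claim survives the pod being rescheduled to another machine.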
And then the PetSet ensures that it creates those instances in a predictable order, and it gives ZooKeeper itself a chance to react to the configuration. Because one of the hardest parts about getting any kind of distributed application correct is that, in the real world, systems fail and you get disconnected from remote servers, having a predictable order and process for managing your stateful application is really critical to ensuring that your application continues to work as necessary. Applications like etcd or Cassandra may need less help, whereas some older applications, say a Java application that you may have designed 20 years ago that is still using local persistent storage or shared memory, might have somewhat stronger requirements on the level of notification they get as they leave and join the cluster. And so while PetSets are still pretty early on, one of our goals is to cover a wide range of use cases, things like shared databases such as MySQL Galera or a Postgres master-slave cluster, Cassandra, etcd, ZooKeeper, and even more of the common tools that developers are using to build and run their applications, and to make PetSets be a solution for running those applications safely on Kubernetes and OpenShift. Over the course of the next few releases, you'll see a lot of improvements to this as we take feedback from the community. So I highly recommend that if you're interested in PetSets in particular, you get involved in the community; this is something that we can use to really drive the state of shared applications much further in the world. As part of the PetSet work, another key focus was on developers who want to build applications that are more complex than a simple hello-world app, who sometimes need to do a couple of things.
They might need to share configuration, they might need to know some information about the environment they're running in, and they may need to perform initialization before their application starts running. A great example of initialization: you've got a front-end service, and before you can even start up, you want to wait to make sure the database comes up, and then you also want to wait to make sure that another backend database or a backend web service is available. The idea of an init container is that it allows you to run a container before all the other containers in your application are run. You can actually run several; they're executed in order, and if they fail, they get retried, or they halt the operation of your application containers, depending on what you specify. These init containers can download data and write it to shared disk. So you can use them if you need to pull a configuration file from a shared location, if you need to initialize a database, or even to download binaries for the latest version of your application if you want to have a container that's really lightweight and updates from some other store. The ConfigMap is a new concept; just like secrets, it allows you to store text information. A great example would be a MySQL configuration file. That MySQL configuration file might have a number of parameters that you want to manage outside the application, but you want to ensure that your MySQL application and your web application can both get access to those settings. You can add it as a ConfigMap and then share it between your applications and between different deployments. And ConfigMaps can be as complex as a MySQL file, but they can also be much simpler. So one tool that we've added is the ability to set up a ConfigMap that might just have key-value pairs, and those key-value pairs might represent the parameters that your application uses to run.
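To make those two ideas concrete, here's a hedged sketch combining them. The names (mysql-config, frontend, the mysql service address) are hypothetical, and note that in Kubernetes 1.3 init containers were expressed through an alpha pod annotation; the spec.initContainers field shown here is the syntax the feature later stabilized on:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config
data:
  max_connections: "150"         # a simple key-value parameter
  my.cnf: |                      # or a whole MySQL configuration file
    [mysqld]
    max_connections = 150
---
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  initContainers:                # runs to completion before the app containers start
  - name: wait-for-db
    image: busybox
    command: ['sh', '-c', 'until nc -z mysql 3306; do sleep 2; done']
  containers:
  - name: web
    image: frontend:latest       # hypothetical application image
    env:
    - name: MAX_CONNECTIONS      # value injected from the shared ConfigMap
      valueFrom:
        configMapKeyRef:
          name: mysql-config
          key: max_connections
```

The same ConfigMap can be mounted as a file into the MySQL pod, so the web application and the database consume one shared set of settings.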
The downward API allows you to take values out of that ConfigMap and inject them into the application as environment variables or as files on disk. And that gives you just a little bit more flexibility to take containers that normally run outside of Kubernetes, and when you bring them to Kubernetes, use these different tools to avoid having to create a lot of boilerplate inside your image. We call this adaptation: it makes it easy to take applications running off Kubernetes and run them in OpenShift. Finally, I'd say one of the biggest requests we've had comes down to the fact that, in the real world, deployments are very complex things. The easiest kind of deployment might be the rolling deployment, which is something we've supported since OpenShift 1.0, where you run new versions of your application alongside old versions and gradually move traffic from the old instances to the new instances. There is a more sophisticated concept called blue-green deployments. A blue-green deployment is usually when you set up a whole new set of instances of your application with all your new code, while you leave all your old code running. You test the new code independently, and then you move all the traffic over from the old to the new. And then, on top of that, there are a lot of more subtle variations, where you don't just want to cut over all at once, or you don't want to cut over on an instance-by-instance switch, but instead you want to assign weights. Those weights might express: I want to run a copy of my application that contains the new code, and I want to run it there for 15 minutes, an hour, a day, to allow me to assess whether the percentage of users who are being sent to that new instance end up having a successful experience, whether through metrics or user testing. And so in OpenShift 1.3, we've added the ability to give a route multiple backends.
At a high level, routes in OpenShift are a way to program a shared load balancer that lets any application that listens for HTTP or HTTPS traffic easily set the host name or the port you want to send traffic to, and set security settings like TLS or shared sessions. And by allowing a route to point to multiple services, a user can set up, in this example, service A, B, and C, where service A receives 90% of the traffic, service B receives 10%, and service C receives 0%. So as an example, I might set up service C to be the newer copy of my application, and I would gradually increase the amount of traffic going to service C until I'm comfortable it's working, and then I might cut service A and service B over to service C by setting them to zero and setting service C to 100%. And so A/B routing can be a very powerful tool in very specific scenarios. With the OpenShift 1.3 release, we feel that we've really spanned the range of what you can do, and what we really want now is to gather feedback from developers and from people deploying in production: what are the additional tools that we can build into the deployment platform to make it easy to test applications, roll them out, have the feedback as a developer that you're confident everything is going to work correctly, and then build that into, for instance, a Jenkins pipeline? These tools, just like all the existing tools in OpenShift, are things that we can enable from the Jenkins pipeline. So you might run automated tests against the new version of your code while it is running in production, and then, if those tests pass and the metrics from the backend demonstrate success, you can continue to roll the rest of the application out. And again, OpenShift is always about being as flexible as possible for the scenarios that users actually have.
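The A/B scenario described above maps directly onto the route's new weighted backends. A sketch, with the service names taken from the example and a hypothetical host name:

```yaml
apiVersion: v1
kind: Route
metadata:
  name: frontend
spec:
  host: app.example.com          # hypothetical host name
  to:
    kind: Service
    name: service-a
    weight: 90                   # receives 90% of the traffic
  alternateBackends:
  - kind: Service
    name: service-b
    weight: 10                   # receives 10% of the traffic
  - kind: Service
    name: service-c
    weight: 0                    # the new version, receiving no traffic yet
```

Shifting traffic to service-c is then just a matter of raising its weight on the route and lowering the others, which is exactly the kind of step a pipeline can automate.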
While different parts of this may be situational, the underlying goal for OpenShift is to enable all developers to find the point of integration they need. OpenShift out of the box does the right thing for you, and as you grow in sophistication, you can take those additional steps. A key part of development flows is understanding what images and what content are being deployed in your cluster, and in OpenShift 1.3, there are a number of really exciting improvements to how OpenShift exposes Docker images. OpenShift has an integrated Docker registry that's capable of working with other Docker registries. The goal of the OpenShift integrated registry is to make it easy for developers to iterate and then push to production environments. Because it's just a Docker registry, it's also possible to work with something like JFrog Artifactory or a Nexus repository to pull images into OpenShift and then to push them back out once you've completed your testing. As of 1.3, OpenShift has integrated a new web console that allows you to view the content in the OpenShift registry much as you would expect to on a public website. On the right-hand side of the screen, you can see I'm in the Red Hat project and I'm viewing an image stream that has the latest tag. So it's a Docker image: I can see what it runs and how it runs, as well as the older versions and the person who pushed those images. This is really for when you want to host Docker images that you're producing for others in your organization or your community to consume. On the developer side, on the left-hand side of the screen, you can see where we've brought the same sorts of information and capabilities into a more developer-focused view. So as a developer, I might be rapidly iterating.
I might want to take the image that's checked into the OpenShift registry, pull it, and download it to my own laptop to do some testing before I sign off on the application. And so the information about that image, where it comes from, the content, and the security settings are also available to me. As we continue to grow the capabilities of the integrated OpenShift registry, it's very important for us to make it possible for developers to track and manage content, as well as to expose that content for others to consume. One of the great features of the OpenShift registry is that it is set up to aggregate images from many different places. OpenShift 1.2 added the ability to import images from another registry at a periodic interval. And so as a development team, if you want to gather images from a number of different sources, from Docker Hub or from your internal enterprise private registry, OpenShift can kind of be the glue that lets you get access to all of the images, and you get to decide which images you want to consume in your local environment. With the OpenShift 1.3 release, we've also continued to refine the flexibility of the OpenShift build pipelines. There are many different ways to build Docker images, and there are many different ways to build the artifacts that compose applications. In Java, you might build WAR files or enterprise EAR files; you might also publish JARs to a Maven repository like Nexus, whereas in Ruby, you might create gems and upload them to RubyGems. And for OpenShift, it's extremely important to be able to offer tools that allow you to compose your Docker images either on or off the platform. So the easiest integration is always to build images and just push them into the OpenShift registry or to tag them in; OpenShift can take those into the fold.
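Tagging external images in, and keeping them updated, is handled by image streams. The periodic import mentioned above is a per-tag policy; a sketch, with the source image chosen purely for illustration:

```yaml
apiVersion: v1
kind: ImageStream
metadata:
  name: mysql
spec:
  tags:
  - name: latest
    from:
      kind: DockerImage
      name: docker.io/library/mysql:latest   # external registry to pull the tag from
    importPolicy:
      scheduled: true            # re-check the upstream registry on a periodic interval
```

Builds and deployments in the project can then reference the image stream tag, so they pick up new content as it's imported rather than pointing at the external registry directly.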
Because OpenShift is an environment where it's easy to run builds using spare capacity, the OpenShift platform exposes the lower-level source build and Docker build concepts that let you easily build images from source code. With the integration of Jenkins, you can take standard Jenkins tools, compose them together, build on the platform as necessary inside Jenkins workers, and then either choose to use the tools on the platform or to push images directly on your own. As well as providing all of these different mechanisms, with OpenShift 1.3 we're working to document and explain to end users how to build flows that maximize the efficiency of the development team. And that will get more powerful over time as we build increasing levels of integration into tools like Jenkins to leverage these mechanisms. So even with all of the exciting things that we've done with OpenShift 1.3, there are even more exciting things coming down the road in the next versions of Kubernetes, Kubernetes 1.4 and Kubernetes 1.5, and the OpenShift releases that will be based on them. There's a ton of improvements just across the board in making the platform for applications as stable and flexible as possible. To an end developer, these may not be details that you care about; from an operational point of view, our job is to make it possible to run some of the more demanding use cases while keeping the end developer experience simple. And so storage has traditionally been a very difficult problem for people running applications. You need to have disk storage attached to a particular machine. It needs to be backed up, tracked, and monitored, and applications may need to ask for storage to be provisioned ahead of time.
And with OpenShift 1.4 and 1.5, there are a number of really exciting integrations being built in to allow you to dynamically provision storage, whether it's in the cloud on AWS or GCE, in a developer environment, or on bare metal in a private cloud, and to build integrations that create storage on demand and share that storage with the application that needs it. So as a developer, instead of needing to define exactly what the settings are for the NFS server that I want to connect to, an application administrator can make NFS storage available, and our goal with dynamic provisioning is to enable the application developer to ask for and get the storage they need as they move from environment to environment. Now, this sort of flexibility is certainly possible today. With OpenShift, we want to make it easy out of the box for an application that needs 10 gigabytes of storage of a certain type to get that storage no matter where it runs. And you'll see increasing levels of simplicity and focus on initial setup, making storage something that really just becomes a detail that an administrator can set up and let run, and nobody on the platform has to worry about it. We also want to make it even easier to get shared storage for developers. Today, there are certainly a number of different ways you can configure storage, but one of the easiest is using the actual resources of the cluster, using the machines on the cluster to give storage to applications as they need it. This works really well for test- and development-style applications, where you may not have a very strong need for an SSD or a high-IOPS AWS volume, but instead you really just want access to a few gigabytes of local storage to do something interesting, or to download and track a few images for your application.
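On the developer's side, that dynamic provisioning contract reduces to a persistent volume claim. A sketch of the beta annotation form used in this era, before storage classes became a first-class field; the class name "fast" is hypothetical and would be defined by the administrator:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
  annotations:
    volume.beta.kubernetes.io/storage-class: "fast"   # class defined by the administrator
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi              # the "10 gigabytes of a certain type" from the talk
```

The claim is portable: the same request can be satisfied by an EBS volume on AWS, a persistent disk on GCE, or NFS in a private cloud, depending on what the administrator has wired up behind that class name.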
And so adding the capability in Kubernetes for your applications to get small amounts of storage from the cluster on demand will really lower the bar for storage in general. We'll combine that with a number of other improvements to the platform. A lot of this is more targeted towards operational teams, as I mentioned before, but one of the exciting features coming in OpenShift Origin 1.4 will be scheduled jobs. Scheduled jobs allow you to set up a recurring container that gets run as needed on the cluster. So if you need to run a job every night to generate a report from your database, in OpenShift 1.4 we plan to have the scheduled jobs concept available. A scheduled job is just a container like any other, so your application code and your scripts can be run at any interval you desire, whether it's nightly, hourly, or every five minutes, to make the changes you need on the cluster. So if you need to do periodic maintenance on the cluster, you can just set it up and let it run. On the application development side, there are a lot of exciting improvements. The biggest is some of the work going on in Kubernetes right now around service linking and the service catalog. As a developer, I may not care about this when I'm working on my own applications, but the more I'm working with other teams, and the more I'm working with environments like OpenShift Online that offer the ability to consume resources hosted by other cloud providers, the service catalog is a way to bring all of those together so that it's easy to consume a wide range of network and computing services from the cluster. And service linking will be an easy way, as an application, to choose to consume those resources. So say I've built a simple Ruby on Rails application and I need a database: I might go to the service catalog, ask for a database to be provisioned for me, and then connect it to the application.
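Returning to the scheduled jobs feature for a moment, the nightly report example might be sketched like this, using the alpha ScheduledJob API from Kubernetes 1.4 (later renamed CronJob); the image and command are hypothetical:

```yaml
apiVersion: batch/v2alpha1       # alpha in Kubernetes 1.4; renamed CronJob in later releases
kind: ScheduledJob
metadata:
  name: nightly-report
spec:
  schedule: "0 2 * * *"          # standard cron syntax: every night at 2:00 AM
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: report
            image: report-generator:latest    # hypothetical reporting image
            command: ["generate-report", "--source=mysql"]
          restartPolicy: OnFailure             # failed runs are retried, not restarted in place
```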
And the person who's offering that database might perform the administration and management and keep that database running for me, so I can focus on making my application work as easily as possible. I've talked a little bit about PetSets and the improvements coming to upstream deployments. There are a lot of tools we intend to build to make it easier for developers to integrate with the platform. One of the most hotly requested has been a JSON schema for OpenShift that would allow people to build and generate client tools in a wide variety of languages. We're going to combine that with better support for external clients and external consumers, so it's even easier to script and integrate with OpenShift. As part of our ongoing work to improve the performance of the platform, we're planning in the next few releases to move to etcd3. Now, etcd3 offers some pretty significant improvements over the etcd2 release that's part of OpenShift today. That will translate to bigger and more efficient clusters, as well as the ability to have longer histories of information available for applications to consume. There are a lot of exciting under-the-covers details that we'll be able to talk about more over the next few months. With that comes the desire to make it even easier to extend Kubernetes and to plug in new capabilities. There's been a ton of great work in the Kubernetes community to expose things like additional tools for administrators to gather information from the hosts of the cluster and make those available for application scheduling and optimization. We want to enable new tools like the workflow engine, which allows you to build jobs that can have dependencies and prerequisites, and to be able to integrate that and run it alongside your OpenShift cluster.
Third-party resources, which have been worked on for a long time in Kubernetes and which make it easier for someone to build a simple integration with OpenShift or Kubernetes to solve a very specific problem, are getting some final tweaks and will finally be something that developers can rely on. At the lower levels, for people who are looking to run in different networking environments, there's going to be some extended work on the Container Network Interface for even more flexibility in OpenShift. At the developer level, we get a lot of feedback: there are people who have very specific desires and very specific needs, and people who have pitched in in the community to make some of these things possible. There are a ton of really important steps that we're trying to take, both in the web console and the CLI, that make it even easier for developers to understand what's happening with their applications and to go back and troubleshoot. This slide just shows you some of the breadth of that: support for private Git repos, improved user experience around adding and removing secrets, being able to view historical logs, custom metrics, and access to better logging and more details about logs. We think of this as really an iteration of capabilities that are already available on the platform that can be made even easier to use, offering easy-to-use, ready-to-go tools so that someone can deploy the whole OpenShift cluster and benefit from all of these things at once. I hope you're as excited about the future direction of OpenShift as I am. We know that there's a ton of work left to do. There's a huge number of people involved in the Docker, Kubernetes, and OpenShift communities who are all working together to try to make all of these tools as fast and efficient, as easy to use, and as useful as they possibly can be. We would love to have you join us.
Please go to commons.openshift.org, which is the primary place to learn about how you can engage and participate in the community.

Thank you. Also, we're going to be having an OpenShift Commons gathering on November 7th in Seattle. So if you want to hear more from Clayton and from the other upstream project leads across the OpenShift Commons, please check it out at commons.openshift.org; you'll find all the information about the gathering there. It's co-located with KubeCon, CloudNativeCon, and PrometheusDay, so there's going to be a lot of stuff happening in Seattle that week. We hope you'll join us, meet with all of the different members of the OpenShift team and community, get more involved in the whole project, and learn more about what we're doing and how we're making Kubernetes and OpenShift work well together.

Exactly. And every time someone new contributes to OpenShift and to Kubernetes, we all get a little bit stronger. The OpenShift team hangs out on Freenode in #openshift-dev, and you can always find us there. If you have an issue with OpenShift or a feature you'd like to see, please go to github.com/openshift/origin and let us know what it is that we can do to help you run your applications more effectively.

Awesome. Well, thanks, Clayton. I'm really looking forward to this next release, OpenShift 1.3, and to learning more at the Commons events coming up in Seattle. Hopefully we can get a lot more going with the community in the next little while. So thanks again for your time today, and we look forward to hearing more about the next couple of releases coming.

Thank you very much, Diane.