 And welcome everyone to the breakout sessions for CloudCon Integration and APIs. This is the session for Eric Jacobs, Principal Technical Marketing Manager for OpenShift at Red Hat. And Eric, welcome back to the show. Thanks very much, Vance. We're really glad to have Eric with us this morning. He ensures OpenShift continues to meet the needs of cloud-facing enterprises, right now with a specific focus on ensuring OpenShift Container Platform simplifies DevOps to accelerate delivery of agile applications. And Eric has a great deal of technical expertise, in fact, having served as a Red Hat solution architect for six years before transitioning to his current post. And in his session this morning, Containers and DevOps: From Hype to Reality, Eric will tell us the real deal on how Red Hat is bringing capabilities to the OpenShift container orchestration platform to convert it into a PaaS-plus that can deliver on some key business benefits, making container and DevOps technologies readily available on the OpenShift platform. And just a quick reminder: if you haven't done so, let me recommend you download Eric's slides. They're really great and illustrative of this architecture. And in that same area, you'll see that Eric and his team have brought us some white papers and other valuable downloads. They're available right now without any extra registration required. And we like to make these sessions interactive where we can. So to connect with Eric, just type into the question box. Now, Eric, let me hand it back to you and tell us about containers and DevOps from hype to reality. Thanks very much, Vance. According to RightScale, the DevOps adoption rate has increased from 66% in 2015 to 74%. Many organizations are adopting DevOps, but not just for new greenfield projects. They're also transforming their traditional IT operations using DevOps methodologies. 
Gartner says that up to 20% of their clients will be adopting DevOps for their traditional IT operations in 2020. But why is that? And what's the reason behind all of the hype and the interest in DevOps and these new development and operations methodologies? Because of deployment, configuration and other inconsistencies between applications and application environments, delivering an application into production is usually a non-trivial task that involves a lot of friction and back-and-forth cycles between developers and IT operations. And these painful deployments lead not only to poor quality of the delivered service but also a tendency to try to simply avoid the pain by avoiding deploying to production often. In turn, this results in larger deployments with more features being delivered at one time, which ultimately results in a higher risk of things going wrong, which results in more pain and fewer deployments, and the story continues on and on. At its core, the problem is that developers have different motivations and concerns than the IT operations group does. The developer's job is to deliver change and to care about new frameworks and architectures and the tools to do so, while IT operations' job is to bring stability and care about environment predictability, lifecycle management, security, costs, monitoring and so on and so forth. As a result, while applications are designed and developed in accordance with the developer's mindset, the operations requirements usually end up being layered over it at the end, leading to frustrating friction at the end of the delivery cycle. Containers provide a consistent environment and tooling that both developers and operations can use to package, deliver and manage these applications. This can simplify the deployment process, regardless of how the application looks or the framework used. 
They also provide for a common set of building blocks that can be reused in any stage of development to help enable the recreation of identical environments, whether you are in development, testing, staging or production, essentially extending the idea of write once, deploy anywhere. The concepts for DevOps are not new and have been around since the late 2000s, but they didn't really gain popularity because implementing the practices proved to be very difficult with virtual machines and the other technologies available at the time. DevOps has really taken off in the last few years as containers gained popularity and significantly simplified the automation that DevOps advocates. But what are containers? As with many things in IT today, the answer depends on who you ask. From an infrastructure perspective, containers rely on Linux operating system features like control groups, kernel namespaces and SELinux in order to isolate slices of the underlying operating system. Each container has libraries from the Linux operating system inside of it. Containers are simple to build and run and consume fewer resources when compared to virtual machines. This enables us to run many containers on the same infrastructure and increase our overall utilization. While the container provides many benefits for infrastructure, there are complexities imposed by the need for orchestration and management of the containers in order for them to be used effectively. From an application perspective, a container is a way to package the entire application as a self-contained binary artifact. It is easy to package applications that are already built and simple to share those packaged containers across the organization. The developer has full control over the content of the container and can package everything that an application needs for running inside. In the eyes of many inside the organization, containers come to the rescue as the common language between developers and IT operations. 
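[Editor's note: the package-and-share workflow described above can be sketched with a few container CLI commands. This is an illustrative sketch only, assuming a local Docker daemon; the image name, tag, and port are hypothetical.]

```shell
# Package the application and its libraries into a self-contained image
# (assumes a Dockerfile in the current directory; names are illustrative).
docker build -t myorg/myapp:1.0 .

# Run it; the kernel isolates it via control groups, namespaces and SELinux.
docker run -d -p 8080:8080 myorg/myapp:1.0

# Share the exact same artifact across the organization through a registry.
docker push myorg/myapp:1.0
```

The same three steps work identically in development, testing, staging and production, which is the "write once, deploy anywhere" idea in practice.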
Containers can drive a revolutionary change in how these two sides of an IT organization work together. They allow developers and the line of business to own their applications in the form of containers that can be deployed, maintained and operated on top of an existing infrastructure that the operations team provides, without compromising security or compliance. The promise of containers is that everybody's needs are fulfilled and these teams can finally work together in better ways than they previously were able to. But what does a deployment pipeline or a deployment workflow look like when using containers? It all begins with the familiar process of a developer committing a change to the source code repository. Then the CI/CD (continuous integration and continuous deployment) engine, in some cases Jenkins or other tools, gets notified, grabs the application code from the source repository and rebuilds the application with this new change. With the introduction of containers, the CI/CD engine at this stage can package the application as a container image which can be deployed into the target environment. The container will run the exact same way regardless of whether the target environment is physical, virtual, or a private or public cloud infrastructure. The container packaging decouples the application from the infrastructure and enables portability throughout the delivery cycle. Docker Hub, a public container registry, provides statistics that show there is huge interest around Docker-formatted containers. Virtually every team is building and deploying Docker-formatted containers for some purpose. However, despite the huge interest in Docker-formatted containers and the large number of images that are built, only 18% of them end up in production. Why is there such a huge difference between the number of containers that are pushed to the registry and the number of containers that end up in production? 
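[Editor's note: in OpenShift, the commit-triggers-rebuild flow described above can be expressed declaratively as a BuildConfig. The sketch below is illustrative, not from the presentation; the repository URL, names, and webhook secret are placeholders.]

```yaml
# Hedged sketch: a Git push fires a webhook, which triggers a rebuild that
# packages the application as a container image ready for deployment.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp
spec:
  source:
    git:
      uri: https://github.com/example/myapp.git   # hypothetical repository
  strategy:
    dockerStrategy: {}            # build using the repo's Dockerfile
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest          # the resulting image, deployable anywhere
  triggers:
  - type: GitHub
    github:
      secret: webhook-secret      # placeholder value
```

Jenkins or another CI engine can drive the same flow; the key point is that the pipeline's output is a portable image rather than an environment-specific deployment.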
Why do so many organizations start adopting Docker containers but fail to take them all the way? The reason is that it's difficult. Building and deploying containers on a single machine is easy. However, you need to have answers for a number of questions in order to be able to utilize and deploy containers across multiple hosts in a production environment. In order to deploy a container into your infrastructure, you need to answer several questions first. Where do you store the image so that it's accessible to the hosts? Which container image should be deployed on which host? Which host in the environment has the most capacity? How do we monitor the health of running containers? And what do we do if they have crashed? How do we scale out or scale up our containers running in our infrastructure? To further complicate the matter, real-world applications rarely consist of a single component. Even monolithic traditional applications have a database, a cache, and a number of other components. In order to deploy a multi-container application, there are even more questions to answer. Which containers should be deployed together? Which containers can access each other? How do we limit access to certain containers? How do we control what's running inside of a container? How can containers find and discover one another? How do we scale containers across multiple hosts, multiple racks, multiple regions? What do we do about persistent storage if the container needs to store stateful information, like a database? In order to streamline the deployment of containers into your infrastructure, a solution is needed that assists not only with complex deployment scenarios, but also with managing these containerized applications and their components. We need more than just containers. 
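[Editor's note: several of the questions above are answered declaratively by an orchestrator. The Kubernetes-style manifest below is an illustrative sketch, not part of the presentation; all names, images, and values are hypothetical.]

```yaml
# A single Deployment manifest answers: which image (and which registry it is
# stored in), how many replicas to scale across hosts, how much capacity the
# workload needs, and how health is checked so crashed containers restart.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                                   # scale out across hosts
  selector:
    matchLabels: {app: myapp}
  template:
    metadata:
      labels: {app: myapp}
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.0   # image accessible to all hosts
        resources:
          requests: {cpu: 250m, memory: 256Mi}  # lets the scheduler pick a host
        livenessProbe:                          # restart on crash or hang
          httpGet: {path: /healthz, port: 8080}
```

Service discovery, access control, and persistent storage are handled by companion objects (Services, NetworkPolicies, PersistentVolumeClaims) in the same declarative style.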
Building and running single containers on a single host is fairly simple, but building and running multi-container applications across multiple infrastructure components requires more than just the container itself. Container solutions generally deal with four areas. The container host: a lean operating system that is optimized for running containers. This would be similar to a virtualization hypervisor in a traditional IT infrastructure. The container platform: a platform that helps with building, deploying and orchestrating containers on infrastructure, regardless of whether it is physical, virtual, or private or public cloud. Container management: a management solution that targets the operational aspect of containers, allowing you to manage capacity and have that quote-unquote single pane of glass view across the infrastructure, the containers, the virtual machines that they may run inside of, your hypervisors, and how everything is connected. The management solution should also enable policy and security management across these deployed containerized applications. Lastly, container storage: a storage solution is required that provides storage for applications running in containers. This is especially important for traditional stateful applications like databases and other solutions that need to write data somewhere it will remain available. When it comes to the Red Hat portfolio, Red Hat's containerized solutions cover all these areas via the operating system, Red Hat Enterprise Linux Atomic Host; the container orchestration platform called OpenShift; the container management and infrastructure management solution called CloudForms; and Red Hat Gluster Storage. Let's take a moment to zoom in a bit on OpenShift, Red Hat's container orchestration platform. OpenShift is Red Hat's container orchestration platform that utilizes Docker and Kubernetes, bringing in all of the pieces required for building, running and deploying containers at scale. 
OpenShift Container Platform is the leading enterprise distribution of Kubernetes, optimized for continuous application development and multi-tenant deployment. OpenShift adds developer- and operations-centric tools to enable rapid application development, easy deployment and scaling, and long-term lifecycle maintenance for teams and applications. OpenShift is built on top of and utilizes industry-standard technologies: the Docker container format and the Kubernetes orchestration framework. Red Hat is a leading contributor to both upstream projects, along with Docker Incorporated and Google. Docker containers take advantage of many of the core Linux kernel technologies, and Red Hat's deep experience and expertise with Linux provided solid ground for building the container platform based on Docker containers. Kubernetes was born out of 15-plus years of experience running containers at Google via their Borg and Omega projects. And it sees huge traction and convergence in the industry as the container orchestration framework on which others build their solutions. Many well-known vendors like Red Hat, Google, Intel, CoreOS, EngineYard, Apprenda, Apsara, VMware, Canonical and even Microsoft have adopted Kubernetes in their solutions, sometimes at the cost of replacing their own proprietary orchestration technologies. Red Hat collaborates with others to drive innovation in both the Docker and Kubernetes communities and makes that innovation stable and consumable for the enterprise through OpenShift Container Platform. OpenShift builds on top of the industry-standard Red Hat Enterprise Linux as the foundation for running enterprise-class containers. 
On top of the Red Hat Enterprise Linux foundation, OpenShift provides container orchestration, scheduling, persistent storage, lifecycle management, operational management and other container infrastructure services required for using containers at scale in production. This group of services is sometimes called containers as a service (CaaS). OpenShift does not limit itself to containers as a service but also provides application services like lightweight application platforms, message brokers, single sign-on, distributed in-memory caches and other middleware solutions, in addition to image build automation and CI/CD pipelines, allowing developers to take advantage of containers using the same tools and frameworks they already use today. This layer of services is usually referred to as platform as a service, or PaaS. With OpenShift, you don't have to choose between containers as a service and platform as a service. OpenShift allows you to pick the right services for the various applications you're deploying and provides choice for the development and operations teams to use the platform the way it fits best. Going back to the deployment pipeline scenario from earlier, OpenShift helps solve the challenge of deploying and managing the containerized application after it is built by the CI/CD process. Based on the blueprint of the application, OpenShift can orchestrate the application components across various parts of the infrastructure, scale them up, manage their health and define suitable security policies. Adding storage and operational management solutions from Red Hat: Gluster Storage enables hyperconverged storage that can itself run inside containers in the infrastructure and provide persistent storage for the containerized applications, and CloudForms brings visibility to the entire infrastructure, from the hardware all the way up to the containers and applications inside. OpenShift has more than 200 customers worldwide across a broad set of industries and use cases. 
Here are just a few examples of who is implementing DevOps with containers on OpenShift. As an example, a financial services customer was able to reduce their deployment time from weeks to just days utilizing Red Hat OpenShift. They were able to build a push-button developer stack based on a platform-as-a-service architecture and fully integrate it with their CI/CD processes to establish a DevOps workflow that ultimately streamlined their application delivery. Utilizing various Red Hat products and services, this company was able to reduce their deployment time from weeks to days, improve their developer efficiency and provide for more robust production deployments. As an example in a different industry, a leading health insurance provider was running the risk of missing ACA-mandated deadlines due to long application delivery cycles. By using Red Hat OpenShift, they were able to create and deliver an architecture that enabled them to build, test, and deploy new microservices-based applications using a DevOps methodology. They were able to reduce their production delivery cycle from over nine months to merely three weeks, which ultimately reduced their time to market from idea to delivery. By providing an on-demand infrastructure, they were able to increase their operational efficiency and ultimately reduce their operational costs. Red Hat's Open Innovation Labs provide a focused and intimate teaming engagement that helps companies accelerate their innovation. Red Hat provides the people, process, and technology instruction that most businesses require to meet their modern challenges. Red Hat Open Innovation Labs is a platform to help customers bring their innovative ideas to market more quickly. 
Customers can learn how to build and containerize applications using OpenShift while working closely with Red Hat subject matter experts to understand and integrate DevOps approaches and methodologies, to be able to accelerate innovation, development, and digital transformation now and into the future. At the close of an Innovation Labs engagement, organizations leave with the skills and the community-powered tools and methodologies to both drive and sustain innovation. We hope you've enjoyed learning about how Red Hat thinks that OpenShift can help enable both the utilization of containers and the adoption of DevOps methodologies. This concludes the formal portion of this presentation, but I'll hand it back over to Vance to see if there are any questions. Eric, thank you very much. A really great look at this whole shift towards DevOps and how OpenShift has really done a lot to anticipate a lot of the needs that people have found required, needs that are often hard for them to meet on their own. Really great session. Thanks very much. You're very welcome. Eric, as you might expect, a couple of kind of business-level, app-level-type questions as well as technical ones. Let's get into a little bit about the developer side of things. You mentioned at one point in your session that you've architected OpenShift to be very developer-friendly; developers can bring tools with them that they may already use. Here's a question about that. We currently use Jenkins, Node.js and Spring across many of our development departments in the company. Are we able to use all of these and others we're considering? Absolutely, Vance. We'll start with Jenkins first. Many organizations have already adopted Jenkins to some degree, and using Jenkins with an OpenShift deployment is absolutely doable and is very easy. 
Red Hat provides and supports a plugin for Jenkins that enables Jenkins to integrate seamlessly with an OpenShift environment to guide both container builds as well as container deployments in the context of their existing continuous integration and continuous deployment processes. When it comes to languages and frameworks like Node.js and Spring, Red Hat makes it very easy to consume and continue to use these on the OpenShift Container Platform. Through the Software Collections Library, Red Hat provides a number of build automation-enabled Docker images for popular languages and frameworks like Ruby, Python, Perl, PHP, Node.js, Java, and so on and so forth. By using these pre-built images or by building your own base images, it can be very easy to essentially combine source code with these existing images to produce a containerized application, utilizing that existing CI infrastructure with Jenkins and OpenShift. Fantastic. You probably get this question all the time. It's kind of the learning curve question that we're starting to hear a lot about in this migration to these new architectures. And it simply says, we've been using the cloud for quite some time now, but we're frankly new to containers. Does Red Hat or OpenShift offer some sort of guidance or templates that we could use to figure out how to begin designing apps through containers? Absolutely. So for almost all of the common languages and frameworks, we provide pre-built example applications and application templates. For example, Ruby on Rails with the Mongo database as a predefined application template, with an example application that a developer can simply clone the repository for and start to build out what they want to do using those pre-built templates and whatnot. 
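[Editor's note: the "combine source code with a pre-built language image" flow Eric describes is OpenShift's source-to-image (S2I) mechanism. The commands below are an illustrative sketch against a hypothetical repository; they assume the `oc` CLI and access to an OpenShift cluster.]

```shell
# Build a containerized app by combining the pre-built Node.js builder image
# with application source from Git (repository URL is a placeholder).
oc new-app nodejs~https://github.com/example/myapp.git

# Expose the resulting service so external traffic can reach it.
oc expose service myapp

# Kick off a fresh build, e.g. from a Jenkins job after a new commit.
oc start-build myapp
```

In practice the `oc start-build` step is usually driven automatically by a webhook or by the Jenkins plugin rather than run by hand.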
In addition to that, we have lots of documentation and tutorials as well as a deep services bench that can help analyze the existing development flows that organizations are using and help them translate those into sort of this new containerized world. When it comes specifically to the cloud, because OpenShift is just built on top of Red Hat Enterprise Linux, OpenShift is supported anywhere that Red Hat Enterprise Linux runs. So public clouds like Amazon or Google or Microsoft Azure are all places where the OpenShift Container Platform can be run, or it can even run across different clouds. And the benefit to this is that it provides abstraction from the cloud infrastructure to help avoid cloud lock-in while providing a consistent platform and interface for development and operations teams to use. Great, great. We like to let attendees learn from others where we can, where that's possible, Eric. And here's a question that talks about the great list of adopters that you showed. And it simply says, this is a very impressive list of OpenShift users. Can the speaker share any detail about the types of applications, whether vertical-optimized or transactional, that would help us make our own selection on where we could get started? Yeah, absolutely. So especially in the financial services industry, we have customers who are very concerned with high-performance applications. There is a lot of enterprise Java as well as these new lightweight Java frameworks like Spring being deployed. We have customers who are deploying existing traditional monolithic applications that were built in sort of old-school, if you will, languages like C and C++ onto the platform. And really, the platform itself simply provides a way to help make it easier to run applications. We can almost ignore the fact that it uses containers and simply think of it as a deployment orchestration platform. 
If I have some software that I can package conveniently to run on Linux, I can put it into a container and I can potentially orchestrate and deploy it and manage it using a platform like OpenShift. You know, Eric, another part of getting applications launched quickly and updated even more quickly is the idea of trying to take the human element out as much as possible. And so this question goes to automation. And it simply says, how much of the OpenShift capability would let us speed up a traditional SDLC so that we can actually automate some of the tasks that are very time-consuming or error-prone? Almost all of it, Vance. Really, the core design philosophy around OpenShift and Kubernetes is essentially streamlining deployment. And because of the use of containers and the way that containers can be monitored, if you will, for change in the registry that holds them, it can be very easy to fully automate deployment, including more complicated deployment scenarios like A/B, blue-green, canary and things of that nature. Some of these deployment scenarios are actually built right into OpenShift today, and we're working to enable more and more of them in a fully automated fashion with very simple configuration. However, with some very limited scripting and integration with CI solutions like Jenkins, it can be possible to implement just about any deployment scenario you can envision without having to have really any manual steps, other than the approver hitting the button to allow the process to continue to the next phase. And we have a number of examples of things like simple dev-to-production pipelines that can be viewed both from inside the OpenShift interface as well as in the CI solution itself. You know, that's such a rich answer. I'd like to just take a moment. 
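[Editor's note: one concrete way OpenShift expresses a canary-style rollout is a route that splits traffic between two service versions by weight. The manifest below is an illustrative sketch, not from the session; the service names and weights are hypothetical.]

```yaml
# Hedged sketch of a canary rollout: the route sends most traffic to the
# stable version and a small slice to the candidate, shifting weights as
# confidence grows (or rolling back by setting the canary weight to 0).
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp
spec:
  to:
    kind: Service
    name: myapp-stable
    weight: 90          # 90% of requests stay on the current version
  alternateBackends:
  - kind: Service
    name: myapp-canary
    weight: 10          # 10% reach the candidate version
```

Blue-green works the same way with an all-or-nothing flip between the two backends, which is why these scenarios lend themselves to full automation from a CI pipeline.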
The way you phrased OpenShift as a container orchestration platform: for folks that may not have heard those two words used together, they certainly know containers and they know orchestration. Maybe you can shed a little bit of light on what your thinking was and what kind of real benefits you get by taking that container orchestration approach with OpenShift. Sure. I don't want to suggest that containers are a replacement for virtualization, but if we draw an analogy to virtualization, which is something that most organizations are not only familiar with but have heavily adopted, OpenShift as a container orchestration platform is analogous to the virtualization platform plus the virtualization management solution. So when it comes to virtual machines and somebody coming in in a self-service way, perhaps requesting a new VM be deployed, they don't care in many cases, or even know, ultimately where that virtual machine is going to end up. The management layer, the management solution, is essentially orchestrating the deployment of that thing. And this is analogous in a container environment with something like OpenShift Container Platform. Somebody comes in in a self-service way, whether that's a developer or even a production operator, and makes a request for a workload to be run. If no other additional restrictions are included in that request, OpenShift looks at the existing environment, figures out the best place for that workload to be deployed and tells the system to run a container in that place. And depending on how the health checks and other things are defined, OpenShift will continue to manage and quote-unquote monitor that running instance for as long as somebody asks for it to be running. The other thing is that when you're dealing with a container on a single host, it's not very difficult to use it or get it running. 
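[Editor's note: the "figure out the best place for that workload" step can be illustrated with a deliberately simplified toy. This is not OpenShift's or Kubernetes' actual scheduling algorithm, which weighs many more factors; all node names and numbers are made up.]

```python
# Toy sketch of capacity-based placement: pick the node with the most free
# capacity that still satisfies the workload's resource request.
def pick_node(nodes, cpu_request, mem_request):
    """nodes: list of dicts with name, free_cpu (millicores), free_mem (MiB)."""
    candidates = [n for n in nodes
                  if n["free_cpu"] >= cpu_request and n["free_mem"] >= mem_request]
    if not candidates:
        return None  # nothing fits; a real scheduler would queue or reject
    # Prefer the least-loaded node (spreading); real schedulers also consider
    # affinity rules, taints, storage locality, and more.
    return max(candidates, key=lambda n: (n["free_cpu"], n["free_mem"]))["name"]

nodes = [
    {"name": "node-1", "free_cpu": 500,  "free_mem": 1024},
    {"name": "node-2", "free_cpu": 2000, "free_mem": 4096},
    {"name": "node-3", "free_cpu": 100,  "free_mem": 8192},
]
print(pick_node(nodes, cpu_request=250, mem_request=512))  # prints node-2
```

The health-check half of the story is the same loop run continuously: if a placed container stops passing its checks, the orchestrator reruns placement and starts a replacement.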
But as soon as you introduce multiple hosts, you have this problem of, how do I make sure that two containers on different hosts can talk to one another over the network? How do I make sure that some real-world storage that exists outside the OpenShift environment is always connected to this application, no matter where it ends up, if it gets moved around due to problems or health check issues or maintenance or whatever it is? How do I get external requests from outside the platform into the platform to reach these containers? All these things are core features of an orchestration platform that are needed, and in many cases are value-adds that Red Hat has brought to the Kubernetes project. So when you adopt something like OpenShift, you're getting the complete set of requirements, really, to be able to utilize containers at scale in an environment across the entire SDLC. Eric, fantastic, fantastic comments here. In fact, as I see we're running close on time, a question comes in on just that very topic, and it simply says, is there a way that we can take a free trial or even create a sandbox with OpenShift? Absolutely, Vance. We're going to provide a number of links and documents alongside this presentation and webinar, but if people just go ahead and visit OpenShift.com, they can find out about the number of different ways they have to evaluate and try OpenShift. We have a hosted online free environment that's in a preview state right now where people can sign up and then get access for 30 days or so before the environment gets reset, and they can play around with some of the features and functions of OpenShift. We have ways to evaluate OpenShift on public clouds. We can provide evaluations and trials on-premise. So really, it's just a matter of somebody reaching out to us and contacting us and saying, hey, we'd like to try. We'll figure out what the best option for them is and help them move forward. 
Eric Jacobs, Principal Technical Marketing Manager at Red Hat for OpenShift. Thanks very much for a great session and a really great tour of the capabilities that Red Hat has put in place to really power this whole new migration to DevOps. A really great session. You're very welcome. Thanks so much for having me. And as we love to do here at CloudCon Integration and APIs, here is a slide that summarizes many of the other links that Eric mentioned, the resources, including that great section on being able to play with OpenShift. Thanks again, everyone.