Hi, everyone, and welcome to the webinar: Integrated PaaS platforms allow DevOps to drive innovation. Before we get started with today's presentation, there are a few items to quickly mention. You should see a taskbar at the bottom of your screen. Each icon is assigned to a particular element of today's webinar. If you're not sure what an icon does, hover over it with your mouse, and a box will appear to tell you its function. Also, below the slides window, you should see a blank ask-a-question box that allows you to type a question. After you type the question, click Submit to send it to the presenters. Feel free to submit your questions throughout the webinar, and our presenters will address as many as possible following the presentation. You can also submit any technical questions related to the webinar platform here. Please close down other browser windows or applications that might be splitting your bandwidth, including VPNs, as these might interfere with the audio or video stream. If you experience any connectivity issues, please refresh your browser. Today's session is being recorded, and all registrants will receive an email within one to two days of the event with a link to view this presentation on demand. So now I'm gonna hand it over to today's speakers, Aaron and Mike.

Great, thank you, Nick. So welcome, everyone. Today's webinar is one in a series that Red Hat is putting on about DevOps. I think we're right in the middle of the series: we've had two before this one, and we'll have two after it. I believe the one right before was with our partner ThoughtWorks, when we were talking about the agile development process. So this flows very nicely into this example. During this talk, we're completely honored to have Paychex join us. Aaron and I will take you through how Paychex is leveraging OpenShift, and in particular how it's integrating OpenShift into an existing DevOps tool chain.
And this is an issue that's faced by a lot of our customers, so we are very grateful to Paychex for coming out and talking about it with our customers. As Nick mentioned, my name is Mike Barrett. I'm a product manager here on the OpenShift team. I specifically look at the on-premise solution, so I work a lot with our enterprise and data center customers. I walk them through architectures and integrations, and drive features and roadmap into the product set. I have with me Aaron Schaffer. Aaron, why don't you introduce yourself?

Thanks, Mike. Hi, everyone, I'm Aaron Schaffer. I'm with IT Ops here at Paychex; I'm the Unix engineering manager.

Excellent. So today we have a great agenda put together for you. We're gonna cover who Paychex is. It's good to know what industry they're in and what problems they face. We're gonna cover what challenges they were specifically going after when they started to shop the market for a platform as a service to integrate into their tool chain. When you look at Paychex, I think you'll find, like I found, that they are a classic company that really leverages the latest and greatest in technologies. Being with one foot in financial services and one foot in human resources, they really need the mission-critical part of the technology to totally be there. But they also need a really cutting-edge user experience and a mix of cutting-edge technologies for the human resources side. So they're really forced to invest in that technology, and they have some great solutions out to market. So if you're interested in working with a great thought leader in this area, definitely seek out Paychex. Then Aaron's gonna go into how he actually solved a problem, and the problem that we're going after is a very heterogeneous one. We're taking Oracle WebLogic with a Spring application on top of it, and looking at how that looks running on OpenShift and how that integration then flows into the existing CI and tool chain.
Then we're gonna talk about the benefits of the integration and then go into some Q&A. So with that, I'm gonna hand it over to Aaron. He's gonna take us through Paychex.

Thanks, Mike. Paychex was founded in 1971, originally focusing on payroll processing for small businesses. We're headquartered in Rochester, New York, and are a publicly traded company on NASDAQ. We're included in the S&P MidCap 400, with over 2.3 billion in revenue. We offer a wide range of services, including payroll processing, retirement services, insurance, and a fully outsourced human resources solution. Paychex customizes the offering to the client's business, whether it's small or large, simple or complex. Again, fully outsourced human resource systems for clients is what we offer. We have a high level of platform customization for clients across all the services that we provide, cutting-edge platform components, and we offer mission-critical application delivery and performance. Some of the challenges that we face, as most organizations do, are in delivering technology and processes. We're always challenged with finding the best way to deliver dynamic environment builds, considering infrastructure challenges, configuration management, consistency in configurations, code delivery, and data sources. We have dynamic variables and environment-dependent configurations. We have to have considerations around JVM heap sizes, the infrastructure (including configuration of that infrastructure), and application configurations. Consistent feature enhancements to production are always at the forefront: a focus on the minimal functionality required for production, and an automated release process where only the modified components of an application are released. This is a challenge to understand: whether we spin up or spin down the full infrastructure or just the components that are modified, and what best promotes consistency for the development environment. There's the requirement to quickly return to a known state, for code testing and recovery from production issues.
Continuous testing: testing for regression, performance, availability, and functionality, declarative release validation, and automated testing. So while we consistently look to improve on these challenges, we found real performance and real solutions within our PaaS solution to meet the majority of these items and move us forward in our objectives.

Awesome, thanks, Aaron. So I'm gonna talk about OpenShift, but before I do, I mean, you said a lot there: a publicly traded company, over two billion in revenue. You know, getting into financial services, I don't know about you, but I missed a paycheck once in my life and I was pretty upset about it. And then human resources: I've been at companies where I've taken like two hours to pick my annual benefit plan. So there's definitely a lot of pressure on making a user-friendly experience in a pretty much mobile market. I want it to feel and look like my iPhone, like working with my television. So a lot of pressure on you to definitely deliver. And these topics come up a lot across a lot of other companies. Dynamic variables, constant feature enhancements to production, a quick return to a known state, a continuous testing flow. These are all the mantra for a cloud application. These are all what we're striving toward with a lot of our other customers. So it's a great example of what we've been doing in the last year. As we look at what a PaaS is, it definitely has these qualities that you see on your slide. But something great has happened in the industry. We've decided that we've gotten really good at five-nines architectures. We've gotten really good at standing up application stacks on top of that. Now we're willing to look at how we insert change into those environments. We're willing to push more cutting-edge technologies into production.
And we're only able to do that if we're able to lay down an agnostic layer that is polyglot, that covers a lot of the application runtimes and services, but allows me to connect that to my continuous integration, delivery, and code management tool chain, where I can have repeatability and success in having change go into production and roll back very quickly if needed. So self-service is there. Automation with CI and CD. This is OpenShift's out-of-the-box understanding of Git: knowing what and how to add Jenkins, knowing how to execute Maven, knowing how to solve Ruby gem dependencies, knowing how to integrate with Eclipse-based IDEs and IntelliJ. So these are all those topics coming together out of the box for us. This is auto-scaling, right? This is not suddenly having the ability to have 200 copies of my application, but having the ability to stand up an application in a very known and defined way and have the automation of adding that cluster technology, whatever that technology might be, to have more nodes in that application, and have that based on something like CPU or memory or HTTP requests, and have that also shrink down automatically for me, and have that idle for me: hold the URL, allow people to hit that URL, and bring that application back up and serve it out. So that's all part of this PaaS solution. Built-in security: I'll get into the nuts and bolts of the solution on the next slide, but we do spend a lot of time with our SELinux implementation. I really don't know of any other PaaS vendor that's going through the heroics that we are in automating SELinux for you in your multi-tenancy. And then built on RHEL, and this is important, because we find in the nooks and crannies of the data centers across the world that RHEL is playing an extremely important part, and we wanna make sure that we were based on that foundation and building forward on it.
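The scale-up, shrink-down, and idle behavior just described can be sketched as a simple threshold policy. This is an illustrative sketch only: the function names, thresholds, and per-gear request metric are assumptions, not the actual OpenShift scaling implementation (which used an HAProxy-fronted cluster).

```python
# Illustrative sketch of the scale-up / scale-down / idle decisions the
# platform automates. All names and thresholds here are hypothetical.

def desired_gears(current, reqs_per_gear, up_at=16, down_at=4,
                  min_gears=1, max_gears=8):
    """Return the gear count a simple threshold policy would pick."""
    if reqs_per_gear > up_at and current < max_gears:
        return current + 1   # add another gear to the cluster
    if reqs_per_gear < down_at and current > min_gears:
        return current - 1   # shrink back down automatically
    return current

def should_idle(reqs_per_gear):
    """Idle the app (keep the URL registered, stop the gears) once
    traffic drops to zero; a later request wakes it back up."""
    return reqs_per_gear == 0
```

The point of the sketch is that the platform, not the operator, owns this loop: requests per gear go up, a gear is added; traffic stops, the app idles while its URL stays live.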
So when we look at the nuts and bolts, it's good to have a shared vocabulary as we talk about the technologies, and some of the words that we'll throw out at you are these four words that I'll cover now. The first is node, and a node is simply an operating system. It's RHEL in our case, in this version RHEL 6, and the node can be a physical box or a virtual machine; we're just looking for that abstraction layer. We are totally isolated from the infrastructure. We have those infrastructure services sort of consumed through plugins, to DNS or authentication, those types of things. It's very easy to add nodes into the environment to add more CPU or memory. We have concepts of regions and zones. If you wanna partition things out and make sure you have dedicated infrastructure or not, there are a lot of choices at that layer. Then on top of the node, we run what we call a gear, and a gear is our Linux container. It's built on SELinux. It's got a file system namespace activated so we can polyinstantiate based on SELinux mandatory access controls. It allows us to have a user log in and automate this ability to have mandatory access controls attached to that user ID, attached to his processes, and attached to his file systems. And so we're really at that C2 security level in what we're capable of doing with isolation. At the same time, there are resources involved. We gotta make sure that he's not consuming too much CPU and not consuming too much memory. We're using cgroups. And there's quite a bit of intelligence built into our cgroup algorithms. You can imagine that when an application starts up, it's beneficial to allow that application to spike on startup and then bring it back down to a constrained CPU and memory environment. So we have that logic based on which payload he's selecting. We also have classic disk quotas involved, so we can keep track of the storage and shared storage that that particular tenant is consuming.
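The gear-profile idea (a named bundle of cgroup and quota limits, with a startup grace period) can be pictured with a small sketch. The profile names match the small/medium/large profiles mentioned later in the talk, but every number below is made up for illustration; these are not the real OpenShift limits.

```python
# Hypothetical mapping of gear profiles to the cgroup-style limits the
# platform enforces per tenant. The numbers are invented for illustration.
GEAR_PROFILES = {
    "small":  {"memory_mb": 512,  "cpu_shares": 128, "quota_gb": 1},
    "medium": {"memory_mb": 1024, "cpu_shares": 256, "quota_gb": 2},
    "large":  {"memory_mb": 2048, "cpu_shares": 512, "quota_gb": 4},
}

def startup_boost(profile, factor=2.0):
    """Model the 'spike on startup' behavior: temporarily relax the CPU
    limit, after which the platform settles the gear back to its profile."""
    base = GEAR_PROFILES[profile]
    return {**base, "cpu_shares": int(base["cpu_shares"] * factor)}
```

The design point is that a tenant never negotiates raw cgroup values; he picks a profile, and the platform handles the transient boost and the steady-state clamp.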
So that's the gear. He sits on a node and talks out to a broker. We have this broker node. He's a Ruby engine. He's basically keeping track of the environment, taking requests in, feeding them out. He's got some supporting nodes under him. He's got persistent storage, MongoDB. He's got ActiveMQ as a messaging bus. We're using MCollective to share facts about the nodes. So there's a lot of centralization of tasks done at that broker level. The requests come into the broker. We have a web console, we have a command line, and we have a REST API. You can come into the broker through those. The IDEs out there in the world, for example, use the REST API. We give you a command line called rhc. He's built on top of the REST APIs. And then we have that web browser that you can take advantage of. So the developer just sits at his laptop. He works on his code in the environment he's comfortable with. And that pushes out automatically through the broker on creation, or directly to his application environment post-creation. So it's a nice solution that we have there. So those are the words that you just have to keep in mind: the node, the gear, and now the cartridge. So a cartridge is how we give the gear a personality, how we tell it that, hey, you're going to be an EAP application server, or you're going to be a MySQL, or you're going to be a PHP, or you're going to be a Node.js. These are the workloads that we're placing inside of the gear concept itself. The cartridge specification is very open. We've learned from others in the industry. We did not limit it to only being stateless. We did not limit it to only running web-tier applications. We did not limit it to very specific protocols. So you have a lot of freedom in what you're able to run on the platform. You can still make choices to have what we call a SaaS cartridge, to call to a service that lives off of the PaaS. We find that to be a common use case.
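Since the web console, the rhc command line, and the IDEs all sit on the same broker REST API, a client can talk to it directly. The sketch below builds (but does not send) an application-creation request; the endpoint path follows the OpenShift v2 REST layout, but treat the exact URL, broker hostname, and payload shape as assumptions for illustration.

```python
# Sketch of addressing the broker's REST API directly, the same API the
# rhc CLI and the IDEs use under the covers. URL layout and payload are
# assumptions based on the v2 convention, not a verified client.
import json
import urllib.request

def create_app_request(broker, domain, name, cartridge):
    """Build (but don't send) a request to create an application."""
    url = f"https://{broker}/broker/rest/domains/{domain}/applications"
    body = json.dumps({"name": name, "cartridge": cartridge}).encode()
    req = urllib.request.Request(url, data=body, method="POST")
    req.add_header("Content-Type", "application/json")
    req.add_header("Accept", "application/json")
    return req
```

In practice you would hand this to `urllib.request.urlopen` with credentials; the point here is just that broker, domain, and cartridge are the only coordinates a client needs.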
For example, Oracle databases that are purchased on dedicated hardware. They don't want to move; maybe they're owned by a different organization within the company. They don't have to move. You just write a shim cartridge, and it's the same exact user flow. The user selects the cartridge. He doesn't know it lives off the PaaS. It's the same use case for us. Now, these cartridges have to be able to automatically bind to each other. We use environment variables quite a bit to accomplish that. But the magic here is I have a choice. As an application or a platform designer, I can say this application is made up of PHP, MySQL, and some other components, and I can bake that together as my application. And then when people want to deploy it, they can deploy my application and get all those layers. Or I can piecemeal it. I can decide I'm going to be working with EAP for a while, and then, somewhere down the road, months later, I can just go ahead and add Postgres, MySQL, or MongoDB. That's totally doable. That will auto-bind. I don't have to know how to make these. The cartridge writer is doing that automation for me. So it's a great way to really expand your use of technologies, to get your developers to taste a lot of the different runtimes and frameworks out there. So that's the technology. Again: the node, the gear, the cartridge in the gear, and the broker is the brains. And you can interact with the environment as you would through Git or your IDEs or the web console. Now, how does that lend itself to DevOps? Why do we constantly talk about a platform as a service in the same breath as DevOps? Well, when you look at platform as a service, in many ways we were trying to figure out what part of the systems, through the cartridge specification and through the gear, you needed to lock down to allow the ops side to not spend a lot of resources fixing what the developers are wanting to get on and change.
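That environment-variable auto-binding can be made concrete: when a database cartridge is added, it exports connection variables into the gear, and the application cartridge simply reads them. The `OPENSHIFT_MYSQL_*` names below follow the v2 naming convention; treat the exact variable names as an assumption in this sketch.

```python
# Sketch of how application code consumes the variables another cartridge
# publishes when the two auto-bind. Variable names follow the v2
# OPENSHIFT_MYSQL_* convention; treat them as assumptions.
import os

def mysql_url():
    """Assemble a connection URL from the variables the MySQL cartridge
    would export into the gear's environment."""
    host = os.environ["OPENSHIFT_MYSQL_DB_HOST"]
    port = os.environ["OPENSHIFT_MYSQL_DB_PORT"]
    user = os.environ["OPENSHIFT_MYSQL_DB_USERNAME"]
    pwd = os.environ["OPENSHIFT_MYSQL_DB_PASSWORD"]
    return f"mysql://{user}:{pwd}@{host}:{port}/"
```

Because the binding is just environment variables, the same application code works whether MySQL was baked into the application from day one or added months later.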
And at the same time, you want to give the developers the freedom to get on and really experience the platform in the ways they want to, without causing heartburn for the operations side. And so platform as a service has definitely struggled with finding a balance between the two. And as we deploy OpenShift, we can definitely see that we automatically get the standardization of operating systems. There's no other way to get a gear. You have gear profiles, small, medium, and large, which define their resource utilization. And cartridges are very well defined: what files are writable, which ones are not, how they come up configured, and how they allow people to add environment variables to them. You also get into code as configuration. And this is the Puppet, the Chef, the Ansible tools that really drive our data center operations today. We've had a massive influx in the last five years of people really investing in those. Those will still be there for parts of the data center, definitely, but where we alleviate the need and bake it into the platform is where we connect the CD and CI and the code flow into the production applications: how that application's lifecycle works, how it stops, how it adds another node to itself, how it maybe gets killed, how it idles. These are all definitions whose characteristics we can capture and really bake into the platform in that configuration. Self-service is there. The integration with the Git, the Maven, the dependencies, the IDEs that we talked about before definitely comes up in the DevOps conversations. The just-in-time delivery, not having a predefined application up for people to experience but building it very quickly on the fly, and having that enabled now with the use of Linux containers. That's also a huge thing for us. Action hooks, right?
You know, no matter how fast you want this thing to go, no matter how self-service you want it to be, there are gonna be times when you want it to call into a configuration management database, or get into an ITIL or some sort of change control process. Maybe it has to trigger another event to occur within the environment. We wanna make sure we have the hooks in the right places to allow all these things to happen. And we definitely offer that. And then, it goes without saying, auto-scale is definitely important in the DevOps story. We wanna take the more classic use cases of things growing and things contracting, and we wanna bake that into the platform and save operations time. So this is why platform as a service, and OpenShift in particular, really resonates well with those DevOps stories. In terms of benefits, we have a lot of customers come to us and say, look, we have an initiative this quarter, or maybe in the next two years we're gonna spend X amount of dollars solving these problems. These are higher-level organizational goals, and they're really being triggered or pushed by a lot of the lines of business. There's a lot of shadow IT, people investing outside of the company in public clouds faster than central IT could. They wanna bring them in, they wanna rope them back in, they wanna make sure that they're offering the lines of business the services that they want, that really help the company generate that revenue. And so OpenShift gets connected to a lot of those conversations. And a lot of those organizational goals are satisfied through some of the benefits and features that we offer. The agility really comes from people being able to have that same cost profile, that same "I'm gonna spend this much money and I'm gonna be able to offer an application web server and Node.js, a PHP," in a very defined way, in a very confined way. And people have really been able to pipe that into their code projects at a more rapid pace once we've stood that up for them.
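To make the action-hook idea concrete: in the v2 convention, hooks live in `.openshift/action_hooks/` inside the application repository and are just executables the platform runs at lifecycle points. The sketch below is a hypothetical `post_deploy` hook that records a deployment for a change-control system; the CMDB endpoint is a placeholder, and everything beyond the hook location is an assumption.

```python
#!/usr/bin/env python3
# Hypothetical .openshift/action_hooks/post_deploy hook: build a record
# that could be POSTed to a CMDB / ITIL change-control endpoint.
# The hook-directory convention is real (v2); the payload is invented.
import os
from datetime import datetime, timezone

def deployment_record():
    """Describe this deployment for an external change-control system."""
    return {
        "app": os.environ.get("OPENSHIFT_APP_NAME", "unknown-app"),
        "deployed_at": datetime.now(timezone.utc).isoformat(),
        "event": "post_deploy",
    }

if __name__ == "__main__":
    # A real hook would send this record somewhere; here we just print it.
    print(deployment_record())
```

Because the hook runs on every deploy, the change-control call happens without the developer having to remember it, which is exactly the point being made above.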
So they have a lot more agility in what they're able to deliver. Huge consolidations: most customers, after they've deployed, will write back to us a couple months later saying, I really cannot believe how much workload I can put on an operating system. And this really comes from the ability to idle. No developer really understands how long he's going to need an environment if he's innovating, if he's really, truly pushing the envelope on what the technologies can do and what he can bring to an application. He may not know, right? And so we want him to not be worried about that and not worry about giving resources back, but allow things just to remain out there idle. And if they do need to be used again, we bring them up very quickly. So we see people really taking advantage of different rates of idle and really consolidating a lot of workload into these multi-tenancy environments through these Linux containers. The reduced risk is really around wanting to have that constant path to production for new code changes, but only wanting to allow that if we can have an integration with the corporate standards, if we can have a callout into an ITIL process, if we can have repeatability, if we can have rollback; we have a concept of deployment IDs. And if we can have a variety of ways we deploy: maybe I want to deploy source code, maybe I want to deploy binaries only. Maybe I want to deploy but not execute out on the application. So we have a variety of different deploy verbs or actions that really help us control risk and still have that path to innovation. So let's get into the solution and what it looks like. Red Hat and Paychex have been working together to really improve a process that was in-house at Paychex. And Aaron's going to take us through, in detail, what is going on with a solution that's based on WebLogic and Spring. So Aaron, why don't you go ahead and take us through that.

Thanks, Mike.
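The "deployment IDs with rollback" idea can be modeled in a few lines: keep an ordered history of deployments and re-activate an earlier one by its ID. This is a conceptual sketch only, not the platform's actual mechanism; class and method names are invented.

```python
# Illustrative model of deployment IDs: every deploy is recorded, and
# rollback re-activates a previously recorded deployment by ID.
# Not the real platform implementation, just the concept.
class DeploymentHistory:
    def __init__(self):
        self._deployments = []   # ordered list of (deployment_id, artifact)
        self.active = None

    def deploy(self, dep_id, artifact):
        """Record a new deployment and make it the active one."""
        self._deployments.append((dep_id, artifact))
        self.active = dep_id

    def rollback_to(self, dep_id):
        """Re-activate an earlier deployment; unknown IDs are rejected."""
        if not any(d == dep_id for d, _ in self._deployments):
            raise KeyError(f"unknown deployment id: {dep_id}")
        self.active = dep_id
```

The risk-control argument above falls out of this shape: because every change to production is a recorded, addressable deployment, returning to a known state is a lookup, not a rebuild.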
Yeah, as Mike stated, we were looking for a solution, for this particular use case, that would give us a reproducible, self-service, consistent environment to improve developer velocity. So we used, in this solution, IBM RTC (Rational Team Concert), Atlassian Stash, Jenkins, XL Deploy, OpenShift, and the developers' IDEs. The workflow starts with the developer committing code into RTC. Jenkins is set up to poll RTC at a set interval looking for commits. Once it sees a change, it'll grab it and perform the build. If it's a successful build, Jenkins will push it automatically to XL Deploy, where the code will be staged. At that point, the developer has a second action, which is to create a WebLogic cartridge, if they choose not to use an existing cartridge that they've already spun up. That cartridge will use Git and Stash to grab the standard IT Ops configuration files and build scripts, and build that cartridge according to our standards. At that point, the cartridge would make an HTTP call to XL Deploy and, if it was a new cartridge, register with XL Deploy, and then initiate the code release. At that point, the Oracle WebLogic admin console will be available if required, or else the developer could just test. So right here, we have the RTC view. This is where the developer would check in and commit their code. Once the code was committed, again, Jenkins would perform the build, and upon a successful build, it would push and stage the code to XL Deploy, which is the screen right here. At this point, the code is ready to go, and the developer would have their second task, which is to create a WebLogic cartridge. In our case, we have a 12c option and an 11g option. Again, if the developer had already created a cartridge and did not require a new one, they could just use their existing cartridge and would not have to do this step at all.
And we would use Git and Stash in this automated process to pull down the WLST scripts, and that way we would ensure that the developer was getting a WebLogic environment that met the IT Ops standards and was consistent and reproducible. Again, the new cartridge would reach out to XL Deploy, register automatically, and also initiate that release to the admin and managed servers. At this point, the developer could go on to the WebLogic admin console and make any changes required. If there were no changes required, they could just hit their exposed service, and that's what you see here. So to summarize: the developer would commit the code, the developer would create the new WebLogic cartridge, and then they were able to test. If they wanted to use an existing cartridge that they'd already created, the only step required from the developer would be to commit their code to RTC; the rest would be an automated process, and then they could just hit their exposed service. So what we found is that the solution improved developer velocity. It provided consistency, which is a concern when providing development and pre-production environments. We wanted to enable developers to make changes, to give them the access they need, but in a consistent manner that met the standards, so that they knew that their code, when pushed to other environments, pre-production, would react the same way as it did in this development environment. The utilization of the hardware resources was very beneficial, and the delivery time, as far as empowering the developers to do what they need to do without requiring IT operations engagement, and the time to deliver to the developers, was a huge benefit as well.

Great. That's excellent. Thanks, Aaron.
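The pipeline Aaron walks through (commit to RTC, Jenkins polls and builds, XL Deploy stages the artifact, a WebLogic cartridge is created or reused, registered, and the release initiated) can be summarized as an ordered sequence of steps. Every function and message below is a hypothetical stand-in, not a real integration with any of those tools.

```python
# Stubbed summary of the Paychex flow described above. Each step string
# stands in for a real tool interaction; nothing here calls RTC,
# Jenkins, XL Deploy, or OpenShift.
def run_pipeline(commit, reuse_cartridge=False):
    """Return the ordered steps the automated workflow would take."""
    steps = [
        f"rtc: received commit {commit}",
        "jenkins: build succeeded",
        "xl-deploy: artifact staged",
    ]
    if not reuse_cartridge:
        # The only extra developer action: spin up a new WebLogic
        # cartridge, built from standard configs pulled out of Stash.
        steps.append("openshift: weblogic cartridge created from stash configs")
        steps.append("xl-deploy: cartridge registered")
    steps.append("xl-deploy: release initiated")
    return steps
```

The shape of the function mirrors the summary: with an existing cartridge, a commit is the developer's only manual step, and everything after it is automation.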
So it's an impressive balance that you have there between a corporate workflow, protecting the code that the developers are producing, matching that to bug counts and lines-of-code situations, and still achieving a very agile, innovative workflow into whatever environment you want to hit. So that was really good to see. It plays out in a number of different ways when we survey our customers. Some customers have that existing workflow, that existing investment, that they totally want to hold onto, and that's where all these different deploy verbs tied to the Git, to fully command how applications can get to the platform on OpenShift, really come into play. But in the same breath, we have customers that are totally looking for a solution in that area as well. They want to use the native Git found in OpenShift. They want to push from that OpenShift Git to the OpenShift-supplied Jenkins, and so on and so forth. So you can totally go in either direction. The most common thing we've found is that people are mixing the two, and in mixing the two, they're solving what's called the laptop problem. A lot of companies out there have their developers doing the unit tests on their local workstations. The problem there is they're finding it doesn't mimic very well what is out in production, and that's kind of a forced situation, because production has a higher cost to the line of business. And so if we can make the cost the same, if we can use containers, if we can use a platform to allow that laptop innovation to actually happen out in an environment that is an exact duplicate of production, we find that when we eventually do go to production, we have a higher match of things just working.
And with that, we have the developers really innovating in early cycles with the platform; then every week or every month, whatever the code-push commitment back to the corporate repositories is, the Git flows in that other direction, in a very similar and classic path to the one Aaron just took everybody through. So those are kind of the mixes that we see out there in the environment, and it was definitely good to have somebody walk the community through how they executed on it. So when we're looking at this slide, this is Gartner, in their last Magic Quadrant, placing OpenShift in that leaders quadrant. And we find a lot of people place OpenShift very high. If you go to openshift.com/awards, you'll see that a number of research analysts and other people have voted OpenShift a leader in this community of products, in this market sector. Lots of accolades. So as we talk to them and we ask them why they put out their reports, the common theme that comes back kind of flows into four buckets. One of them is the developer experience. We're coming out of the box with more than most: an understanding of Git and Jenkins, and a variety of code-push mechanisms, those deploy verbs that are very, very deeply connected to how the platform does lifecycle changes, how it adds resources, how it works with those applications running on it. So that's one bucket that people give us accolades in. The other one is around the services and frameworks that we're supported in running. We're very vendor neutral here, in that we're mimicking what RHEL has really provided over the last decade. RHEL itself has been a foundation for many open source projects that have grown up into what is now the building blocks of cloud 2.0 application frameworks and runtimes. The Node.js, the Ruby, the PHPs, all these great things.
And we're one of the only vendors that can, with our subscription, support our platform, support the operating system that we're providing, and support the content being run on it. And that's a great position to be in, to be able to offer bug-fix erratas and CVE fixes to our customers across that larger spectrum. A lot of the companies that work with us have mandates that say they can't introduce a new technology into the polyglot unless they can provide a fix to a CVE within seven working days. Other industries are at 30 working days. You know, it is costly to establish a mechanism yourself to go out and pull those fixes in. If you just have a simple subscription that's based on a deployment technology such as yum, it really accelerates things and saves a lot of cost there. And OpenShift provides that. So we get high marks for services and frameworks. At the same time, on that same topic, the cartridge specification opens up a large variety of doors for us. It really isn't like some of the other concepts out there that force an HTTP-only front end. You're pretty much allowed the freedom to do a lot with our cartridges. You can put in those web front ends. You can put in persistent-storage-based applications. You can put in just about whatever you want. We're getting pretty excited about this introduction of xPaaS and our Fuse cartridge, which opens up even more protocols beyond HTTP on TCP. So we're getting into a lot of the classic protocols that you'll see in a messaging system endpoint topology, all on the PaaS. And that doesn't negate your choice to run things off the PaaS. You totally have that freedom. It doesn't negate the fact that you can run in a stateless manner. You have that freedom. We're just offering you even more freedoms and even more features beyond those. And that's been picked up quite a bit by the people who are examining our platform. The other side of this is the container itself.
We are an operating system company, and we do work quite a bit with our kernel team in figuring out the best way to handle things like context switching and kernel shared pages, all these things that come up as you force more and more dense workloads, and a wider variety of workloads, onto the CPU's ticks and clocks. So we have quite a bit of research in this area, and we're really pushing the envelope and expanding that as we move into RHEL 7 and Docker. So people seem to like our position on containers. They like our investment in SELinux. They like how we're automating quite a bit of the cgroups and changing the cgroup profiles on the fly. They like these characteristics of what we're offering there. Then on the platform, we've made a pretty bold choice here in not providing a utility that only works in one way. So we have oo-install, which automates the installation of the OpenShift platform. In our next release, it will do even more for the platform, which is exciting for us. But we've also put a lot of research and investment into Puppet, Ansible, and Chef. We want to make sure that OpenShift has some predefined knowledge out there in the community and in our documentation on how you can easily deploy it with your existing tool set. There was a survey recently that showed Puppet is probably 80% more used than any other provisioning and configuration management solution out there. So we wanted to make sure that we were allowing these existing investments to be used. We didn't want to force people to only use one very specific thing for the PaaS. It should mesh and blend well with the rest of your operations across your data center. And so people have grabbed onto that. They like how we run on AWS. They like how we run on Google Compute. They like how we run in the data centers. So we're really able to offer a more hybrid solution. And not to be overlooked:
We also have just an amazing thing happening for the on-premise customers. On-premise customers get the benefit of our OpenShift.com platform. For those of you who don't know about OpenShift.com, this is our public PaaS offering that anybody can log into, even without a credit card, and get three free gears. Then, very quickly, they can go to our bronze plan — just provide a credit card — and consume larger gear profiles across more regions that span the globe, and on into our silver plan, which offers support. But imagine for a minute: what if I built a platform? I'm a major IT vendor. People respect me. The best and brightest come to my platform daily. And I offer every man, woman, and child on the face of planet Earth three free gears. What sort of abuse would be placed onto that platform? What sort of innovation and creative ideas? Just an amazing amount of resources pouring into the platform. Well, we're running that platform. That's happening today. It's been happening for over a year — almost two years. So every cut of the on-premise solution that we sell to you, called OpenShift Enterprise, has that innovation. It has those lessons learned baked into the commands and baked into the user experience. You benefit from the 13-year-old in China who is pushing out Node.js. You benefit from the 7-year-old in California who's getting into PHP and wants his cartridge file system to look a little bit different. All that preference, if you will, is being baked into the enterprise product. And the industry analysts really like that. They like the fact that we have this flow between our online and our enterprise solutions. And you don't have to go it alone, right? Red Hat has a great professional services branch, and we have a great training branch. Our professional services just released a new DevOps offering.
So if you're new to Git, if you're new to Jenkins, and you want us to come in, look at what you're doing, and see how our platform can integrate and accelerate your path to innovation, we definitely have those services available to you. On the training side, there are three main training classes out there, and there's a certification program. So there are a lot of ways to use the rest of the company, and it's a great experience — most customers who partake in consulting and training have a good experience and give good feedback on it. So that's the end of our lecture and our demonstration. It's been a great path with Paychex. Like I said before, they're an unusual case for us in that they are a massive company that is both financial-services focused and mission critical, and also human-resources focused, which means they're into that user experience and that innovation path. So it's a great mix. And having our technologies in play there has been a great experience and a learning experience for us, and a lot of that innovation gets pushed back into the rest of the community. We definitely want to partner with the rest of our customers, too. So if you're out there and you have a story to tell, please contact us and we can get you up here to talk to the rest of the community as well. Now, the next session — I mentioned at the beginning that this is part of a longer series of DevOps sessions — is more about OpenShift itself: standing up OpenShift by itself, with its built-in Git and Jenkins, and how that connects to a DevOps agile development process. This one was more focused on integrating OpenShift into an existing tool chain, so you can definitely take it in both directions. We're gonna open the floor up to questions here. A number of them came in over the course of the talk, so let me just start from the top. Here's one: do you find that you are integrating OpenShift a lot with existing continuous integration platforms, or are you laying down new ones more often?
That's an interesting question. It's almost a 60/40 split. There are a lot of customers out there that are definitely capturing code. They consider the developers' work very important, they're moving it off their laptops, it's flowing in one direction, and they can map bugs and line-of-code control onto that. Where they aren't necessarily automating is connecting that push out to the platform. Typically there's either a manual process involved there or a very large and significant one. And they're not necessarily willing to push a lot of change to production, because they don't have the risk mitigation in the platform around making sure the steps they're taking are very repeatable. So it's about a 60/40 split. And as we talk more and more with these customers, we get them to understand how they can mitigate that change-control risk, and how they can take the benefits of having a corporate standard and mix it with the platform to really achieve a higher level of innovation. And that's what it's all about. So I would say about a 60/40 split. Next question — this one sounds good for you, Aaron: how important was it to have your upper management buy in and support this new platform project? Thanks, Mike. Our OpenShift implementation actually started with a group of individuals on the Unix engineering team doing some 20%-time engineering work on their own. They started getting into OpenShift, seeing what they could do, and built our first on-prem implementation. As it grew over a few months, we saw a specific use case where we could support an effort targeted around WebLogic 12c. It fit in nicely with empowering the developers to have a location they could go to, self-service, and do their development work. So the upper management buy-in came a little bit out of order.
It was really a couple of engineers who wanted to dig into OpenShift. Once we saw the platform, dug a little more into it, and saw what it could do, we definitely got more support and more encouragement to move it forward and stage the environment for the developers to use. So in this case it was a little different than an upper-management directive. I think the engineers themselves saw the value as they continued to play with the environment, and that's how it came to be. Great. And there's another question for you that just came in: when you look at that flow — Git, Jenkins, the deployments, Stash, and OpenShift — what part of the whole thing saved the most time moving to this new platform? Where was the most time saved? Well, IT ops resources not having to stand up new infrastructure, maintain that infrastructure, and absorb all the operational impact it takes to deliver when new requests come in for environments was certainly a big time saver. The integration with RTC, Jenkins, and XL Deploy, allowing the developers to move at their own pace, at any hour, when they need a new environment for testing, saved considerable time. Again, from an operational standpoint, being able to self-service and spin these testing environments up and down was probably the biggest savings. Awesome. This is a funny one, and there are about three of the same question here: can I have your WebLogic cartridge? Yeah, we expected that one. We are working right now to finalize it and get it up on GitHub. We're just finalizing some of the code and scripts and buttoning it up a bit, but I would expect that within the next month it should be available on the OpenShift Origin site index. Great. So I'm seeing some questions about Docker.
To reiterate: we have an existing container — we call it a gear — and we're evolving that gear to start using Docker in the first half of the next calendar year, so 2015 is when we're going to be introducing the Docker standard. If you check out our blogs at openshift.com/blogs, you'll see a lot of talk about our 3.0 release; this is the Docker release. We're actually going a step further and commoditizing our broker layer as well. We've partnered with Google and invested in their Kubernetes project, so we're bringing Kubernetes into the platform, with a pluggable scheduler. So whatever scheduler you're interested in plugging into Kubernetes, be it YARN or Mesos, that's going to be possible. And then executing on Docker at the container level will add so much more content and capability to the platform. Those are the building blocks we're working with. Our goal as the OpenShift engineering team is to bring the OpenShift qualities to that next-generation platform. That's everything we talked about today: the developer experience, the services, how those applications behave and how they can grow and idle, the Git and Jenkins front-end concepts — preserving all that goodness we have around OpenShift today on these new technology underpinnings. So that's the next platform coming out, and that answers that one. Let's see what else we have here. The presentation will be available after today: if you registered for the webinar, you'll get an email with a link to the presentation, and the recording will be available on demand as well. You can go to OpenShift.com, and towards the bottom you'll see a list of all of our webinars, and you can watch any of them at that time if you want. There's a question about some of the services that we run.
You know, OpenShift is unique in this area when we look at our competitors, because we really allow you to mix concepts here. You're not forced to run things on or off the platform that you don't want to; that's your choice architecturally. So you see us with MySQL, Postgres, and MongoDB cartridges that run natively on the PaaS platform, or you can decide to run those off the platform and put them in super clusters — maybe a different organization within your company deals with persistent storage and databases. Whatever architecture works best for you is what's possible on the platform. We partnered, for example, with Crunchy Data, which is doing a Postgres cluster on the PaaS platform. So you can bring a lot of sophistication to these cartridges. A perfect example was today with Paychex running WebLogic in a cartridge with Spring on top of it. So there's a lot of extensibility in that area. Looking at the rest of the questions here: some of the competitive differences we find when we compare our PaaS to others in the industry are that we are focused on a lot of open source technologies, and we support them. Like I said before, we offer you CVE and errata fixes for these low-level technology building blocks. We also support Java EE. This is a huge and significant difference between us and a lot of our competitors: we have a great feature set built around Java EE. We have a much better, deeper user experience through the IDE. We work a lot with our JBoss studio team here; I meet with them every other week, and we talk about how we can add more and more features into that. And we have a large variety of partners that are very open to bringing the platform to their customers. So if you're into big data, partners like Hortonworks offer the ability to have cartridges call into what they may have out on their big data platform in a Hadoop cluster.
So a lot of doors are opening that were previously closed to PaaS, and it's through the innovation and thought leadership of the OpenShift platform that we're able to bring a lot of these competitive advantages to you. When we look at an operating system as a building block, there's the amount of work we do with our RHEL and kernel teams, getting into page-swap fixes and memory and CPU contention — the little nuances you only see when you run highly consolidated workloads on a platform. And there's having OpenShift.com as a publicly available PaaS. We're offering three free gears to literally every man, woman, and child on the face of this earth, and whatever abuse, creativity, and innovation that brings to us, we're able to capture a lot of it and bring it back to you in the on-premise product. These are all things where none of our competitors are really at the same level as us, and that's why we have the traits and characteristics that give us that cutting edge. So, I'm looking through the questions, and the rest seem to be about ROI. Out on OpenShift.com, we have a great ROI calculator — I believe it's in the lower right-hand corner of the webpage. It'll ask you some questions about your environment — how many developers you have, how many code projects they kick off, what technology you're using — and it'll spit out some ROI estimates that you can grab and bring to your upper management. All right. I think that concludes our webcast today. I want to thank Aaron for spending the time with us today. It's been a great partnership with Paychex, and one that I look forward to across all my customers. Paychex definitely understands their business and understands how to make their customers happy. We look forward to partnering with other customers the way we have with Paychex, and I think we'll close it out.
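[Editor's illustration] The ROI calculator mentioned above takes environment facts (developer count, project count, technologies) and produces savings estimates. As a rough illustration of the kind of back-of-envelope arithmetic such a tool performs — every figure and the formula itself are hypothetical assumptions, not the actual calculator's model:

```python
# Back-of-envelope sketch of a PaaS ROI estimate. The formula and all
# input values are illustrative assumptions, not Red Hat's calculator.
def annual_hours_saved(developers, envs_per_dev_per_year,
                       manual_hours_per_env, self_service_hours_per_env):
    """Hours saved per year when environment provisioning is self-service."""
    saved_per_env = manual_hours_per_env - self_service_hours_per_env
    return developers * envs_per_dev_per_year * saved_per_env

# e.g. 50 developers, each needing 6 test environments a year,
# 8 hours of ticket/ops work manually vs 0.5 hours self-service:
print(annual_hours_saved(50, 6, 8.0, 0.5), "hours/year")  # prints "2250.0 hours/year"
```

Multiplying the hours by a loaded hourly rate then gives a dollar figure to bring to management, which matches the self-service time savings Aaron described in the Q&A.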