Hi, everyone, and welcome to the webinar: Cisco Accelerates App Development with OpenShift, Red Hat's PaaS. Before we get started with today's presentation, there are a few items to quickly mention. You should see a taskbar at the bottom of your screen. Each icon is assigned to a particular element of today's webinar. If you're not sure what an icon does, hover over it with your mouse and a box will appear to tell you its function. Also, below the slides window, you'll see a blank Ask a Question box that allows you to type a question. After you type the question, click Submit to send it to our presenter. Feel free to submit your questions throughout the webinar and our presenter will address as many as possible following the presentation. You can also submit any technical questions related to the webinar platform here. Please close down other browser windows or applications that might be splitting the bandwidth, including VPNs, as these might interfere with the audio or video stream. If you experience any connectivity issues, please refresh your browser. Today's session is being recorded, and all registrants will receive an email within one to two days of the event with a link to view this presentation on demand. So now I'm going to hand it over to today's speakers, Dan and Sandeep. Okay, thank you, Nick, and thanks everybody for joining our webinar today. We have an exciting guest speaker today: Sandeep Puri, engineering architect with Cisco. Sandeep is going to talk about how Cisco has adopted Red Hat's OpenShift Enterprise Platform as a Service offering and is using it internally within Cisco to accelerate application development. My name is Dan Juengst and I lead the marketing efforts for the OpenShift business unit within Red Hat, so I will be serving as moderator today. But the bulk of the conversation will be from Sandeep, and we'll hear directly about how they're leveraging OpenShift.
Cisco has added the OpenShift technology to its technology mix to provide a rapid mechanism for delivering web-scale applications and frameworks. They've adopted platform as a service because of some of the key characteristics that it brings with it. For example, it provides a great focus on the developer experience, giving developers self-service access to the technologies they need. It's got multi-tenant container technology within it for very high efficiency. And it's got robust security based on the Red Hat Enterprise Linux platform that it's built on. It's allowing Cisco to move to a DevOps model without impacting the business in any way, but rather accelerating the business. So in this session, Sandeep's going to talk about the environments that they had, the challenges that they faced and wanted to solve with PaaS, how OpenShift aligns with their needs and fits into their architecture, and some of the key lessons learned. So Sandeep, thanks for joining us today and welcome to the webinar. At this point, I'll turn it over to you and you can take us through the conversation. Thanks, Dan. My name is Sandeep Puri. I'm an engineering architect at Cisco. I will be talking through our PaaS efforts and how we got to where we are today with OpenShift. The talk is partitioned into two halves, sort of. In the first half, we'll be talking about who we are, where PaaS fits into our journey, the architectural tenets around how we intended to do PaaS, and, at the end, how we ended up choosing OpenShift for some of its PaaS capabilities; then we'll briefly talk about some of the takeaways that we have. So, Cisco IT Infrastructure Services is responsible for creating and maintaining the stack where applications are developed at Cisco. CITES is the acronym for Cisco IT Infrastructure Services.
They're responsible for the infrastructure as a service components inside Cisco IT as well as the platform as a service components. We started our journey with infrastructure as a service and moved on to platform as a service. CITES is specifically focused on delivering application development environments and infrastructure services in a cloud delivery model. So, a brief overview of the components that CITES provides: infrastructure services comprises a front end that manages the ordering of services you can order from Cisco IT. It's a self-service catalog using Cisco's Prime Service Catalog. There's an orchestrator element that talks to different resource managers, and the resource managers could be anything from your phone provisioning systems to your cloud management systems and so on. In this context, PaaS management systems like OpenShift become one of the resource managers, and all of these reside on Cisco's hardware, so to speak. So, Cisco IT and CITES are responsible for providing environments that support more than 5,000 developers. Today, we have an existing PaaS-like platform with more than 30,000 JVM instances, all hosting applications developed internally by Cisco IT, ranging from custom applications doing HR functions or other IT functions, all the way to packaged ERP applications that reside on these infrastructures. These environments also support all the deployment lifecycles and environments needed to support these applications, including dev, test, stage and prod. These sets of environments are responsible for more than $30 billion worth of transactions flowing through, and they have to be available 24 by seven globally and supported globally. So, that's just a brief overview of what Cisco IT provides.
Now, moving on to our PaaS journey: I wanted to briefly talk about the kinds of clients that Cisco IT typically deals with on the IT infrastructure side. There are three sets of clients, and I'll start from client number three. The third set of clients are folks who just need the IaaS-level services, who say: give me VMs, storage and network, and that's all you need to provide; I will build the things that I need on top of it. So, sort of the cloud IaaS model that you have in the public cloud. The second set of clients, client number two, say: give me the VM and IaaS-level resources, but provide me PaaS or managed resources if you can, and I'll mix the two workloads and provide my SaaS services. And the first set of clients say: I don't really need to worry about VMs and storage and network and so on. You manage that; you provide a higher-level service like a managed application environment, and I'll just consume that and create my applications that way. And we have to cater to all three types. As I mentioned, we started from the client number three bucket, and as we move higher up the stack, we're moving towards the client number one bucket. We believe that if we can enable our developers to focus on the development side of things, most of our client needs will be fulfilled by the client number one model, and that's where the PaaS efforts come in. So with that in mind, we sat down and asked: what do we want from our PaaS environment? We call our PaaS environment the lightweight application environment. Again, we already have a limited PaaS-like environment today, and it's been there for seven-plus years. Limited in the sense that it has some cloud characteristics, but the scale is pretty large, as I was saying, 30,000 JVMs; we wanted to expand the cloud-like capabilities of that environment. And these are some of the high-level criteria that we wanted from that environment.
So the first one was moving from a limited or restricted set of choices for developers to a flexibility of choices. Every language runtime that we built in our previous environment took anywhere from six to nine months to bring in and make GA, right? So it was very expensive for us to create a new environment for every runtime. We wanted an environment that allowed adding capabilities easily out of the box. The second one was moving from closed-source providers of the software to an open-source ecosystem as much as possible, and to us, it was a no-brainer. We've been using open source quite a bit; there's a mix of open-source and commercial software in our stack today, and we're just trying to move more and more towards the open-source side. The third aspect was that provisioning lead times should be as small as possible, so that you can move towards a self-service model where clients come in and just click a few buttons, provide the information we require to give them an environment, and everything is automated as much as possible. So the idea is that clients help themselves, right? They shouldn't be sending requests to queues where somebody steps in and fulfills the provisioning request. The fourth one is something that I think all enterprises struggle with. We wanted an environment that at least provided a framework for full application lifecycle management. What I mean by that is everything a developer does, starting from the initial provisioning of an application, through developing, testing and going through a QA cycle, to deploying and actually going live in production. For all of that, there should either be hooks provided by the framework to help enable a one-stop shop for those actions, or a solution for the entire lifecycle itself.
And the last piece that we were looking for is that the framework we choose for the PaaS environment should allow us to scale from hundreds of applications to tens of thousands of applications. It doesn't necessarily mean that Cisco will be hosting tens of thousands of applications. It means that if we ever need to, we shouldn't need to scramble and figure out a different environment for that. The same framework should allow for large-memory-footprint applications as well as microservices. And microservices and micro applications are the types of applications that will probably increase the number of applications to tens of thousands. So with that framework in mind, we came up with a set of criteria that we used to choose what framework we should be implementing at Cisco. I won't talk through all of the blocks; I will, however, talk about the bottom section. At the bottom, you'll see things like open source, polyglot, on-prem, builder and so on. These were the criteria buckets that we used to look at different types of PaaS frameworks out there, whether they were fully managed or build-your-own PaaS kinds of things. And we evaluated, for example, OpenShift based on these criteria. So, open source: I already talked about the benefits of it. Polyglot was a big one for us, where it's not just the languages, the different kinds of languages and frameworks you support, but also application frameworks and database engines. And it should be IaaS-agnostic: it should be able to run on any kind of IaaS that you have, whether it's a public cloud or a private cloud.
On-prem and off-prem were complementary to each other, at least for us, in the sense that we should be able to host our own PaaS environment using the framework that we choose, but also, if ever required, whether for business continuity reasons or for cloud bursting, we should be able to burst out, for example, to an off-prem or cloud-hosted provider without having to change our code bits, right? And the middle pieces here, builder, provider, subscriber, are roles that we think we should have in a cloud-like environment. We typically care about only the provider and the subscriber: the provider provides the resources and subscribers use them. There's a third category called the builder, where the builder adds capabilities that the provider can take and offer as services, right? That builder category is fairly new, but it's crucial that we separate it out; the provider is not always the one that builds the capabilities. And so with those roles and the other characteristics we had in mind, it seemed to us that OpenShift, even though it did not have all of the blocks in all of these categories, was moving in that direction. In essence, it filled the mold that we had in mind for a cloud-based PaaS environment fairly well, and that's why we ended up choosing OpenShift to kickstart our PaaS. Now I'm going to talk about some of the things that we wanted to ensure while building out our PaaS environment, or lightweight application environment. There are a handful of architectural tenets and aspirations, and I'll be walking through them now.
So the first one: CITES, as I talked about, drove this data center transformation trend in Cisco, and you'll see this in other IT shops as well, where virtualization and automation brought the TCO of a host down quite a bit, brought the end-to-end provisioning times down quite a bit, and drove the virtualization rate up. This trend has been continuing for a while now, and this is a public slide from our Global Infrastructure Services in Cisco IT. The intent here was not to stop the trend: when we do PaaS, we continue the trend that IaaS had, try to bring down the TCO for running an application even further, and bring provisioning down from days to minutes. So that was the architectural tenet we started with. The second aspect was around the notion of a resource model that application developers can work with. What we mean by a resource model is that different application teams may have different requirements for the lifecycles of applications. For regional availability of applications, they may need to have applications hosted in one data center for one app and maybe three different data centers for another app. So they should be able to define their own lifecycles, define how much resource they will be consuming from LAE, and also define their resiliency posture that way, whether it's regional or global, and also continuous application build and deployment. Some application teams may want to use tools like Jenkins and so on, some may have their own build systems, but LAE should be able to provide an environment that caters to almost all of these aspects. The third aspect was around the notion of containers for applications. Up to now, our industry has been okay with using virtual machines as the de facto portable container for applications, and that provides a few things that you see in the slide here.
What we wanted to get to was a framework that provides a cheap and fairly efficient way to host hundreds of applications, scaling all the way up to hundreds of thousands, and for that we needed to get to another stage, where VMs by themselves were not enough. You needed to use containers. Most of our applications are typically written either in languages like Java, which are agnostic of the OS, or in things like Ruby or PHP, which can work similarly across different OSes, right? And most of our OSes are Linux. Containers fit that bill: you get efficiency, but also a way to package the application sort of like a VM, only much smaller and much more efficient. So we thought that this was the route we should be taking as far as building the platforms goes. The last aspect was that it's not just the application hosting environment that application developers care about; it's the entire process of how you develop and deploy an application. Don't pay too much attention to the different icons in this slide. What it's trying to show is that when you provide an application development and runtime environment, you have to cater to all six (or more) aspects of application development, right? Starting from planning, to developing, to source code management, to continuous building, automated testing, deployment and release, and then adapting and scaling. Even though OpenShift, for example, sits at the tail end of this, where you host your development, staging and production, and where applications actually reside, there's an ecosystem of tools that we'll have to integrate with OpenShift, or vice versa, OpenShift will have to integrate with an ecosystem of these tools, right? And this is a snapshot of one type of application lifecycle management that certain groups may be able to use.
So in this case, a development environment in OpenShift will be used to develop applications. You use git commits and so on to push your code, and a Jenkins build server picks it up and runs the tests. After the code passes the tests, Jenkins releases the binaries, or the fully tested source code, to an artifact repository, which will then deploy the code to a production environment using deployment pipelines like UrbanCode. And in this case, it'll be using the binary deployment pieces of OpenShift. So this is just to show that it's not just your development and runtime environment; we have to think about the larger ecosystem of tools. With that, this brings the first half of the presentation to a close. Now I'll switch gears to the technical architecture of LAE, what we came up with for our first release, and then talk about some of the roadmap items after that. So in summary, some of the enterprise integration that we did with OpenShift at Cisco: first, an integrated ordering experience. Cisco has a product that Cisco IT uses called Prime Service Catalog, which is a front end for all services that Cisco IT's clients use. You could be ordering IaaS cloud resources, you could be ordering your phones, you could be ordering anything that a typical IT client would order. It's a single front end that captures the order and uses orchestration tools to call the right resource managers at the end. So we had to make sure that we integrated with that. We had to make sure that there was enterprise single sign-on between our portals and the OpenShift system. And even though in a cloud model we want to get to a security posture where the application itself defines what the security posture should be, there's still a bit of, I would say, legacy architecture around internal and external zones.
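The flow Sandeep describes, commit, Jenkins test run, artifact repository, binary deployment into the PaaS, can be sketched as a minimal pipeline model. Everything below is illustrative: the function names and data shapes are invented for the sketch and are not real Jenkins, UrbanCode or OpenShift APIs.

```python
# Minimal sketch of the build/deploy flow described above:
# commit -> CI tests -> artifact repo -> binary deployment to the PaaS.
# All names here are illustrative, not real Jenkins/OpenShift APIs.

def run_tests(commit):
    # Stand-in for a Jenkins job triggered by a Git push.
    return commit.get("tests_pass", False)

def publish_artifact(commit, repo):
    # Stand-in for publishing a tested binary (e.g. a WAR) to an artifact repo.
    artifact = {"id": commit["sha"], "binary": commit["sha"] + ".war"}
    repo.append(artifact)
    return artifact

def deploy_binary(artifact, environment):
    # Stand-in for a deployment pipeline pushing the prebuilt binary
    # into the PaaS instead of rebuilding from source (binary deployment).
    environment["deployed"] = artifact["binary"]

artifact_repo = []
prod = {}
commit = {"sha": "abc123", "tests_pass": True}

if run_tests(commit):
    art = publish_artifact(commit, artifact_repo)
    deploy_binary(art, prod)

print(prod["deployed"])  # -> abc123.war
```

The key design point the transcript makes is the last step: production receives the already-tested binary from the artifact repository, never a fresh build from source.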
And we had to bake that support in, to ease the migration of current applications that are residing in other environments. So we had to make sure that our deployment of OpenShift at least had that notion built in, right? We're a pretty big ERP shop, in a sense; there are enterprise databases that we have, and we had to ensure that there were drivers available, in this case for Oracle, from within OpenShift, so that applications developed in OpenShift automatically have that driver available. The other aspect of application development and runtime management is the analytics of things that are emitted by applications: logging from the application itself, or logging from the framework that supports the application. We use Splunk to aid in our analytics, and it's not just application logs; it could be data from network devices to compute devices. All of those are available in Splunk, and new environments are being added to Splunk. We had to make sure that OpenShift was integrated well with Splunk too, so that at a given moment in time we could have a holistic view of what is going on with a particular flow for an application. The second-to-last piece: we have a pretty big enterprise integration environment using a web services gateway and typical ESB services. We had to ensure that OpenShift worked well with that integration stack. And the last piece, as I talked about earlier, was how you deliver code to your dev environment, from dev to stage, to QA, and finally to production, maybe in multiple data centers. So we had to make sure we had that integration in there. This is a very high-level architecture diagram of LAE, our lightweight application environment. What it's showing is OpenShift in the box in the middle, surrounded by red, and then all of the other pieces that we added to the mix to make sure it met our needs.
So we added our own reverse proxy and load balancing on top of what OpenShift provides, and we have our global site selector set up so that proximity-based, DNS-based direction of requests gets to the right data centers and so on. We integrated single sign-on with OpenShift. In this case, we had to work with the pieces at the front end, both the brokers for OpenShift as well as the pieces on the individual nodes, and ensure that the single sign-on modules were installed and working well. The next piece was deployment. Again, I've talked about this a little bit: we wanted to ensure that the current tools that we have, and the new set of tools coming up in the environment, are able to integrate with OpenShift either via command line or via API to deploy bits to the OpenShift system. And then all of the other pieces that I talked about: the enterprise databases, the log analytics using Splunk in this case, the eStore integration, which is the single front end for everything that IT does, and the enterprise messaging and the web services gateway. So this is a very high-level architecture; we won't go into the details of how each of these was done, but it gives you a snapshot of what we're doing. Some of the time that we saved with OpenShift Enterprise came from the ability to leverage the RPM updating mechanisms for both the framework itself and the content provided on it, and the use of REST APIs, so that we could easily integrate with our orchestration tools and our portal on the front end, instead of having to write shell scripts and so on that log on to different hosts. The cartridge specification was fairly open, although we've had some challenges getting our developers to actually go out and build cartridges, not with the cartridge specification itself, but with the process that we had to create for them, and we're still working on how to open that up for our developer community.
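The REST integration mentioned above can be sketched in a few lines. OpenShift 2.x exposed a broker REST API under `/broker/rest/...`; the hostname, service account and the specific `domains` resource shown here are placeholder assumptions for illustration, not Cisco's actual setup. The sketch only builds the authenticated request an orchestrator would send; it does not contact a real broker.

```python
# Sketch: building an authenticated call to an OpenShift 2.x-style broker
# REST API, the kind of integration an orchestrator or portal would do.
# Host, credentials and resource path are illustrative placeholders.
import base64
import urllib.request

def broker_request(broker_host, user, password, path):
    """Return a urllib Request for /broker/rest/<path> with basic auth."""
    url = "https://{}/broker/rest/{}".format(broker_host, path)
    req = urllib.request.Request(url)
    token = base64.b64encode("{}:{}".format(user, password).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    req.add_header("Accept", "application/json")  # broker speaks JSON/XML
    return req

# An orchestration tool would then send this, e.g. to list domains:
req = broker_request("broker.example.com", "svc-account", "secret", "domains")
print(req.get_full_url())  # -> https://broker.example.com/broker/rest/domains
```

In a real integration the orchestrator would call `urllib.request.urlopen(req)` (with proper TLS trust configured) and parse the JSON response; the point of the sketch is that a plain HTTP API made portal and orchestrator integration scriptable, with no shell access to individual hosts.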
The OpenShift Enterprise architecture also worked well with our thoughts around multi-tenancy, security around multi-tenancy, and how it integrates with the larger ecosystem. What we struggled with in our previous, larger PaaS-like environment that I talked about, with those 30,000 JVMs, was how you manage idling and scale, and OpenShift provided an easy way to manage the idling of idle applications, scaling, and using resources more efficiently. And then again, out-of-the-box understanding of Git and how to use Jenkins. So with that, I'll briefly go through some of the integration; there are some screenshots that we have. What you see here is eStore, Cisco IT's implementation of Cisco Prime Service Catalog, where a user will log in and see a single point of entry to order anything that they want in IT. This particular snapshot is for ordering a lightweight application environment and then, after you order the environment, ordering the applications that you may want to develop. The second screen shows you what it looks like to order an application in that environment: there are dev and stage lifecycles, the size of the application containers that you want, and so on. The next screen shows the single sign-on experience that eStore, or Cisco Prime Service Catalog, provides. As you can see, it's not just cloud-based services that you can order; you can order pretty much anything that IT provides, and it was crucial for us that this integrated well with the PaaS framework, and the API that OpenShift provided made it fairly easy for us to do that. The next slide talks about the Splunk integration. We have a fairly large Splunk installation where the providers, in this case Cisco IT Infrastructure Services, as well as the application developers, both use the platform for their specific needs. A provider may want to look at logs from network devices, to operating system logs, to database logs and so on.
And the application developer may want to look at the logs that their applications emit. We specifically had to create certain definitions that work well with OpenShift, and I believe those add-ons are being added to the Splunk community site; I'll have to check. So for Ruby on Rails, for example, or JBoss application server, or whatever you may deploy in OpenShift, categories were created, and we looked at how OpenShift provides a UUID for each of those gears, in OpenShift parlance. Application teams, when they come in, don't know about these UUIDs and gear IDs; they know about the application name that they created via Prime Service Catalog. So how do you map that, so that when they log into Splunk, they see only the logs for the relevant UUIDs? There was quite a bit of work done, and it was pretty impressive to see these logs available in Splunk, and not just available: you could actually start drilling down into a specific problem area for an application. And because of the correlation built into the integration with Splunk, you could go all the way up to your web server logs, for example, and correlate those with the application logs using timestamps and other correlation fields. So those are some of the pieces around the integration that we did. Now, keep in mind that when we did the first OpenShift integration, this was with OpenShift 1.2, and we're trying to move to 2.1 fairly soon. What you see here on the roadmap was part of our roadmap at Cisco to provide these capabilities. With 2.1 of OpenShift, a lot of these things are already there; we just have to make sure they match our expectations for, for example, availability zones and the regional data centers that we have. So 2.1 provides a lot of these; we just have to make sure that it matches.
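The UUID-to-application mapping described above can be sketched as a simple lookup, which is roughly what a Splunk lookup table does at search time. All data below is invented for illustration: the UUIDs, app names and log events are not real, and this is a model of the idea rather than the actual Splunk configuration Cisco built.

```python
# Sketch of mapping OpenShift gear UUIDs back to the application names
# that developers know from the service catalog, so a log search shows
# a team only its own events. All data here is invented for illustration.

gear_index = {
    "5318a1b2c3d4": {"app": "hr-portal", "owner": "team-hr"},
    "9f8e7d6c5b4a": {"app": "expense-api", "owner": "team-fin"},
}

raw_events = [
    {"gear_uuid": "5318a1b2c3d4", "ts": "2014-06-01T10:00:00", "msg": "GET /login 200"},
    {"gear_uuid": "9f8e7d6c5b4a", "ts": "2014-06-01T10:00:01", "msg": "POST /claim 500"},
]

def events_for_app(app_name, events, index):
    # Resolve application name -> gear UUIDs, then filter the raw events;
    # a Splunk lookup does essentially this join at search time.
    uuids = {uid for uid, meta in index.items() if meta["app"] == app_name}
    return [e for e in events if e["gear_uuid"] in uuids]

print([e["msg"] for e in events_for_app("hr-portal", raw_events, gear_index)])
# -> ['GET /login 200']
```

The timestamp field carried on each event is what enables the second capability Sandeep mentions: correlating an application's log lines with web server logs from the same moment in the request flow.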
The other pieces that we're looking at: first, how do you ease the migration of applications from our legacy platforms to OpenShift itself? A lot of our applications, as I mentioned, are J2EE applications, and we have to make sure that we have a fairly easy, low-resistance path for application teams to migrate from those platforms to OpenShift, which will give them a lot more benefits. As we build out multiple locations for OpenShift deployment, we're leveraging Puppet, and we want to leverage Puppet not just for the OpenShift bits but for prepping the underlying IaaS stack, so that with one command, or one click of a button, you can have a new zone or a new region stood up for OpenShift. OpenShift Heat integration: we're moving quite a bit of our workloads from a VM-based environment on VMware and ESX to OpenShift and KVM, and as we move to an OpenStack-based IaaS, we want to ensure that we can leverage the Heat integration. At that point, both the Puppet automation and the Heat integration will work hand in hand: Heat will provide the orchestration pieces for OpenShift and will, in this case, be calling Puppet scripts behind the scenes. Custom cartridges: as I mentioned, there has not been a lot of uptake in Cisco IT, at least, in creating custom cartridges, and there are probably quite a few factors behind that, including, from our side, proper documentation and environments to enable the creation of custom cartridges. So that's on our roadmap, to ease that path as well. And the last piece is that, as I mentioned, a lot of application teams have their own release pipelines, and we want to be able to create a system where application teams can define their release pipelines. Some teams may have four or five different environments before code gets to production, and they may have gates from one environment, one lifecycle, to another.
We should be able to provide that as clickable or orderable things directly from eStore, and we're somewhat there. People can already create multiple lifecycles for applications in our current GA environment, but we wanted to add the ability to have gates and so on, so that unless you pass a gate, the application doesn't move from one lifecycle to another, and the gate could be a variety of things, like code reviews and so on, right? So this is sort of the roadmap for us in the next year or so. One thing that I wanted to touch upon: OpenShift provides quite a bit of capability in OpenShift Enterprise as part of the cartridges that they support, but there are also a lot of cartridges available on the community site. So we had to come up with a governance model for how we create an environment where people can introduce new capabilities. We came up with this self-managed, community-managed and IT-managed support model. If a cartridge is out of the box from, for example, OpenShift Enterprise, it's going to be supported across mostly all three. If it's not out of the box with OpenShift, it might be that a particular team wants to use the cartridge; we make that capability available in our express environment, which is sort of a low-SLA environment, and they can manage that cartridge themselves. If there's enough interest from different teams in using it, it moves on to a community-support model, and there are specific criteria to get to that. And if there's a business reason for it to be supported by IT, or IT feels that there's a valid reason for supporting the cartridge, then it moves on to an IT-managed environment, where IT itself, working with Red Hat in this case, provides SLAs around support and so on. So that was a model that we had to introduce in order to not fully open up the floodgates without knowing how to support things from there on. I'm almost at the end of the presentation.
One thing I wanted to show was the adoption metrics; keep in mind that the metrics I'm showing are actually a couple of months old. Adoption has gone up quite a bit since. This was from around the March timeframe; we released the environment at, I believe, the end of January or February, so this was adoption without us having to go after different application teams. As you can see, there's quite a bit of interest in PHP and MySQL, and PHP wasn't an IT-supported language runtime before LAE. Node.js wasn't one either; neither was Ruby or MongoDB. So you can see that there's a lot of interest from application developers in using these technologies, and the numbers have probably increased quite a bit. I wasn't able to pull up fresh numbers in time for the webinar, but this shows that there is adoption going on without us having to go after application teams. And the last slide here that I have is not so much takeaways as some notes, some of which we've provided as feedback to the OpenShift community and Red Hat. Availability is a big deal for a lot of our clients. Availability should go all the way down from your web routing layer, to your load balancing, to your application containers, to your databases, and even to the IaaS layers. We need to take that into account when we build out OpenShift, and OpenShift 2.1, and the upcoming 2.2, have gone a long way towards providing that. Routing and network security are not things that OpenShift provides, and it isn't meant to. The IaaS layer provides network-level security, and you need to make sure that you think of those things, network zones and so on. Application lifecycle management: OpenShift provides hooks to enable ALM. It'd be nice if OpenShift provided ALM capability itself, but maybe that's not the right path for OpenShift to take; there are other vendors and other open-source solutions that provide full ALM.
Just using OpenShift by itself will probably not work for enterprises; you'll have to think about ALM. The other piece, going back to building cartridges, is that we need to be able to provide OpenShift in a box, like a VM, and I think there's something that was released by the OpenShift team, but we need to have some uptake on that and provide it as a capability to our developers. There are add-on cartridges that are non-scalable in OpenShift; your database cartridges are an example. The community will probably need to think about creating scalable versions of add-on cartridges like databases. Region awareness is another, and that's already there in 2.1. Events: there should be an ecosystem or framework around all kinds of platform events, not just start and stop, and we should be able to tap into that event bus; I think there's progress in that area in OpenShift. Logging is another area where we did our own integration with Splunk, but the new releases of OpenShift make a syslog sink available where it's easier to integrate. And again, back to custom cartridges: the downloadable cartridges help here. We're still figuring out the utility of custom cartridges in a typical enterprise setting. There are pockets of areas where custom cartridges are useful for developers directly, but it's mostly the builders, as I was mentioning, and the builders, as opposed to the clients who use these cartridges, are fairly few and far between, so we'll have to figure out how to use the custom cartridge capabilities. So with that I will end my presentation. 
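For readers unfamiliar with what "building a cartridge" involves: in OpenShift 2.x a cartridge is essentially a directory with a metadata/manifest.yml plus a few shell hooks, and the heart of it is a bin/control script that responds to the platform's lifecycle events. A minimal sketch of that dispatch logic might look like the following; the messages and the function wrapper are illustrative, and a real cartridge would invoke the managed server's own start/stop scripts where the echo statements are:

```shell
#!/bin/sh
# Minimal sketch of an OpenShift v2 cartridge's bin/control script,
# written as a function so the dispatch logic is easy to follow.
# The platform calls it with an action name (start, stop, status, ...);
# the cartridge maps that action onto the managed server's lifecycle.

control() {
  case "$1" in
    start)   echo "starting app server" ;;   # real cartridge: exec server startup script
    stop)    echo "stopping app server" ;;   # real cartridge: graceful shutdown
    status)  echo "app server is running" ;; # real cartridge: check the process
    restart) control stop && control start ;;
    *)       echo "usage: control {start|stop|status|restart}" >&2; return 1 ;;
  esac
}

# The platform would invoke: bin/control start
control start
# prints: starting app server
```

This is also where the events point above bites: start and stop are the main hooks the platform gives you, so anything richer (a cache flush on scale-up, say) has to be bolted on around them.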
So one other thing that I want to talk about before I hand it off to Dan: a lot of the integration pieces that we did against these criteria got transferred to our systems development unit in our development organization, which creates validated designs for integrated systems. There's a CVD out from SDU which integrates Red Hat OpenShift with Cisco Prime Service Catalog and deploys it on Cisco hardware with OpenStack, in this case Red Hat OpenStack, and that's downloadable from the Cisco.com website for people to view. Dan, do you want to take it from here? Okay. Yeah, great, thank you, Sandeep. That was a great overview of what Cisco is doing with OpenShift; really appreciate you going into all the detail. For folks on the webinar, if you want to learn more about OpenShift you can certainly go to OpenShift.com. There's information about OpenShift Enterprise, our private enterprise PaaS, as well as OpenShift Online, our public hosted PaaS offering, as well as OpenShift Origin, which is the open source upstream project that's the foundation for all of OpenShift. So I've got a couple more slides to talk to, and then we'll take some Q&A time. If you have questions for Sandeep or for me, feel free to put them into the Q&A box on the screen. I did want to touch on a couple of additional things in terms of making the most of OpenShift by leveraging Red Hat consulting and training. One of the things that customers like Cisco have done is not just purchase the OpenShift software but also take advantage of the expertise that Red Hat provides around PaaS and DevOps in terms of implementing the OpenShift architecture. 
So there are several different consulting offerings that we have, from a simple introductory getting-started-with-OpenShift engagement, to more of an enterprise architecture service, as well as advanced services that can get into really designing your PaaS environment and building out your application lifecycle management environment. In addition, we have training and certifications for OpenShift, so we have a number of different classes available to help you learn how to operate and utilize the OpenShift platform within your organization. And in fact our training organization is 15 years old, and we are celebrating that with some discounts, as you can see on the screen here. So there are discounts of 15% on courses for OpenShift, and that would be a great way to get you jump-started on your OpenShift implementations. So with that, I think we want to thank you. Again, I thank Sandeep for presenting, and we can take a look at some questions now. As I said before, if you have questions please feel free to put them into the Q&A box. There's one here that's come in already, Sandeep, and I'll throw it over to you. What has been the developer reaction to OpenShift within Cisco and the reaction to the LAE environment? So developers are actually excited about a platform that supports multiple language runtimes. Previously they had to either figure out the support model for it with IT, or build it on their own and support it themselves. Today OpenShift, or LAE in this case, provides multiple runtimes, and they're really excited about that. The second piece they're excited about is the ability to plug continuous integration into their workflows. Okay, cool. And you touched on this briefly, but what were some of the key decision criteria that allowed you to select OpenShift? Why did you choose OpenShift as your PaaS platform instead of some of the other offerings that are out there? Sure, and I'll briefly switch back to a slide. 
It's available there on slide 11, where I talk about the seven different criteria buckets and the different criteria inside the buckets. OpenShift seemed to fit those criteria really well: open source, polyglot, and a few other things. And again, it was not just that OpenShift fit the criteria; it was also the roadmap that we saw from OpenShift around where they're going with this, and it fit our general direction really well. Okay, all right, let me look; there are more questions coming in here. How long did it take for Cisco to roll out the PaaS from concept to rollout? We actually started in, I guess, late or mid 2012, and this was not specific to OpenShift; this was about how we provide the next generation of application environments to our clientele. So we started around 2012 and then went through the specifications and what we needed, the requirements and so on, and then choosing different products to work with and settling on a product. So it took about a year and a half to get to an express environment using OpenShift, and then maybe almost two years to get to a fully supported GA environment. But that's not just for the PaaS; that's the entire lifecycle of how we want to provide applications. Yep, that was it. Okay, here's one that I'll take. In the case of Java apps, does OpenShift support frameworks like Spring, Hibernate, et cetera? And then maybe partly for you too, Sandeep: did you face this at Cisco? The answer is yes, OpenShift does support Spring and Hibernate; essentially anything that runs in Java will be able to run within OpenShift. The Spring libraries are supported within the JBoss Enterprise Application Platform, as well as other options within just plain Java within OpenShift. Did you work with Spring and Hibernate at Cisco? Yes, Spring and Hibernate, and regular Java EE as well. It depends on the application teams; we don't provide specific guidance as to which one to use. Right, that's true. 
Yeah, OpenShift is one of the only open source, private PaaS platforms for on-premise use that supports the full Java EE 6 container as well as the Spring libraries. Can you name a few of the custom cartridges that Cisco has developed for your OpenShift environment? Sure, so one of the larger environments that we're working with is WebSphere, and another environment is WebLogic. We built our own PaaS around those environments, and today we're looking at whether we can provide a cartridge for the WebLogic pieces so that the start, stop, provisioning, et cetera is managed by OpenShift, but we use the VT capabilities of WebLogic. That's one area. The other is something we're working on to provide Java XMPP-based cartridges for application development using XMPP. Again, back to what I was saying earlier, these are provided by Cisco IT infrastructure services themselves, not by application developers, and we need to enable the developers to build their own cartridges; that's on the roadmap. Okay, yeah, incidentally you mentioned this in your talk, but there are a number of cartridges for OpenShift that are up in the open source community. They're up on GitHub within the OpenShift area, and one of those is actually a cartridge that one of our other customers built for running WebLogic on top of OpenShift, so that may give you a head start as you look at running WebLogic in the OpenShift environment. Here's another question, Sandeep: which other PaaS vendors did Cisco compare versus OpenShift? Oh, yeah, I didn't particularly put that on the slide, but there were actually quite a few different products that we looked at, both off-prem and on-prem. The off-prem solutions were things like Google App Engine, Heroku and, you know, Amazon. Off-prem by itself did not fit our needs. 
For on-prem we actually looked at things like Cloud Foundry, OpenShift, Stackato, even Cloudify, and what fit our needs mostly was, again, nothing to do with OpenShift or Cloudify per se; it just had to fit our thinking around where we're taking the PaaS, and OpenShift fit the bill, as I talked about earlier. Okay, here's an interesting question. Can you clarify again (it's actually showing on the slide here) the three roles that you identified, builder, provider, and subscriber, and what those roles are in your organization? Sure. So in the case of a lightweight application environment using OpenShift, the provider is Cisco IT global infrastructure services, or CITES, right? They build the environment and provide ordering and all of that. Subscribers are the people who actually come in and order the services on LAE: I need a Ruby cartridge or a Ruby application or whatever. The builders are the ones who provide additional capabilities to add onto LAE; one example is cartridges that are needed. Those are built by a separate team of builders. In this case, the provider itself could be a builder, but we are on the path of enabling the developers themselves to take on builder roles, so we create an ecosystem of capabilities where the builder itself provides SLAs and the provider just hosts them. Cartridges are one example; the Splunk integration is another, where a builder actually creates the Splunk integration capability and the provider takes that and says, okay, I like that and I will use it in my environment. The provider itself cannot scale to do all of the building, and that's why we're trying to separate out the roles, so that some other person, some other group, can do the building of the capabilities. Okay, if that makes sense... Yep, thank you. Here's another question that's come in. Does Cisco have a C++ or .NET environment that would need to be leveraged in a PaaS environment? 
Cisco does have C++, though not in our typical IT application portfolio; it's mostly on the engineering side. We also have .NET, not to the same extent, I'll venture a guess, maybe 5% of the applications, and they're not typically IT supported in the sense that IT supports the infrastructure pieces, the Windows runtime and so on, but the applications are supported by the application teams themselves. So there's not so much a need to support C++ and .NET in LAE in the near term, but as LAE expands out, it's a possibility. Okay, and here's another question. Can you say something about the chargeback model used for your PaaS, and, I guess one question, are you using the chargeback model for your departments? We actually currently use a showback model more than a chargeback model. There's a process that application teams go through to secure funds for whatever projects they're working on, and as new resources are ordered from the eStore catalog, these are deducted from the allocated funds. So that's one way. The other way is that there's a large pool of resources available for an entire org, and we show back at the end of the week, the quarter, or the year how much of the resources they're using. So as far as chargeback and showback, we're still on the journey of figuring out the best way to have accountability for costs. Okay, have you considered using centralized modules for various functions like logging, audit tracking, monitoring tools, automation, business process orchestration, rules engines, et cetera? Certain pieces, yes. Logging and authentication certainly make sense in an IT environment. Other pieces like business rules engines may not make sense, because not all rules engines are built equally, and the use cases for a BRE, for example, may be different for different application teams. 
So we're probably not going to provide rules engines and things like that centrally, unless, again, we see clients come in and ask for the same set of capabilities; then we'll evaluate that and look at providing it centrally. Okay, there have been several questions coming in around training, and the training information is available on our website, either redhat.com or openshift.com. But I'll also reach out to the folks individually after the webinar and provide you with links to the appropriate information. And here's one last question; I think we're almost at the top of the hour. Sandeep, can you talk about the integration with the TIBCO service bus that you've done with OpenShift? Yeah, so when I mentioned the TIBCO service bus: we have a fairly large installation of different pieces of the TIBCO ecosystem. There's a messaging middleware using JMS-based infrastructure; that was the primary piece, applications using JMS. We call that environment MMX, messaging middleware. Applications being developed using OpenShift need to be able to talk to existing applications that leverage JMS-based queues and topics. So I shouldn't need to go download JMS libraries and make sure they work; that should be available directly in my OpenShift runtime environment, and that's the integration that we did. Okay, all right, good. Well, I think we're at the top of the hour. I appreciate everybody's attendance. We've had a great turnout today and lots of questions. We don't think we got to every question, but we will follow up via email for those that weren't answered live here on the webinar. The webinar will be available as an on-demand recording that you can access anytime. You, or your colleagues who can still register, can view the on-demand recording after the fact as well. And we will also be providing another webinar in August; let me get to the link for that. 
And you're welcome to attend our next webinar and learn more about OpenShift and PaaS and the real-world implementations that we're seeing customers do. So again, thank you for your time. We'll wrap up the webinar at this point; have a great day and week.