Our application development teams use our platforms to run and develop key applications that some of you might be familiar with as Comcast customers. These platforms include things like OpenStack, VMware, and obviously Cloud Foundry. Just a quick note: next week we're also going to be present at the OpenStack Summit, so for anybody who's attending, we look forward to seeing you at that conference as well. I sit on our cloud architecture team, where we provide strategic direction for cloud services, and it was actually our team that made the decision to go with Cloud Foundry as opposed to some of the other PaaS providers out there. I'd welcome a conversation about why we made that decision with any of you throughout the conference. What we're going to talk to you about is a challenge we ran into in supporting custom URLs for our customers, and I'll get to that in a little bit. Sergey is our application platform architect; he works with our development teams to make sure they're leveraging proper architectures and design patterns that fit well within the cloud. He's the champion for the 12-factor app in our company, and he's going to talk to you about some of the custom service brokers he wrote that provide a lot of value to our development teams. Sam and Neville are cloud engineers on our engineering team, and they're going to talk to you about what it's like to take Cloud Foundry and run it within an engineering team, and what kind of change in mindset that takes. So that will be pretty interesting as well. The first challenge I'm going to talk to you about is custom URLs. This seems like a relatively easy problem, but it added some complexity for us. Cloud Foundry obviously supports custom domains, and it allows people to choose their own host names so that their URLs can be whatever they want them to be.
However, when you add things like global availability, making sure that a single site can be hosted on multiple Cloud Foundry instances with the URL hosted at a GSLB layer so it can be globally or geographically available, that can present some challenges. Once that URL makes it down to a Cloud Foundry instance, how does it route the traffic, now that it's handling a URL that is foreign to it and has to be supported on both sites? How do you enable SSL for a situation like that? And how do you make it on demand? So the first thing I'm going to talk to you about is HTTP Host header replacement. When a request makes it down to a local Cloud Foundry instance, we have our load-balancing layer do header replacement at the HTTP layer, and that allows Cloud Foundry to understand where to route the traffic, because our HAProxy layer translates the global URL into a locally hosted URL. That enables GSLB support, so people can have a globally available URL that translates properly once it arrives at a site. Then there are multiple SSL certificates. When you have multiple URLs that need SSL enablement, you're going to have a bunch of certificates, those certificates need to be hosted on your HAProxy layer, and there will be multiple certificates for a single HAProxy layer. That presented some challenges for us as well. How do we get around that? We leverage Puppet. Puppet is responsible for making sure that the HTTP header replacements are properly injected into the HAProxy configs, and we put Hiera in front of that so that the values are stored in a database. What that enables is that you can put any web service, any UI you want, in front of your Hiera database, and it will dynamically update the database, dynamically update Puppet, and then update the HAProxy layer.
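As a rough illustration of the Host header replacement just described: the mapping below is invented for this sketch; in the actual setup the mapping lives in Hiera and Puppet renders it into the HAProxy configs rather than application code.

```python
# Map customer-facing (GSLB-managed) hostnames to the locally
# routable Cloud Foundry hostnames on this site. The hostnames here
# are hypothetical stand-ins for the Hiera-backed data.
HOST_REWRITES = {
    "shop.example.com": "shop.site-a.cf.example.internal",
    "news.example.com": "news.site-a.cf.example.internal",
}

def rewrite_host_header(headers):
    """Return a copy of the HTTP headers with the Host header
    replaced by the locally hosted URL, so the Cloud Foundry router
    recognizes the route. Unknown hosts pass through unchanged."""
    rewritten = dict(headers)
    host = rewritten.get("Host")
    if host in HOST_REWRITES:
        rewritten["Host"] = HOST_REWRITES[host]
    return rewritten
```

The same global URL can then resolve (via GSLB) to either site, and each site's proxy layer rewrites it to a route its local Cloud Foundry instance knows.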
And this works well for HTTP headers, and we can make it on demand for our customers. It also works with SSL certificates: if our users need SSL certificates that are custom or specific to their application, they can do that through the same service. And as long as your HAProxy layer supports SNI, you can support multiple certificates for a single IP hosted on your HAProxy layer. So that's the first challenge I wanted to talk to you about. Next I'm going to pass it off to Sergey, who's going to talk to you about some of the really cool work he's doing with custom services and custom service brokers. Thank you, Tim. Hello, everybody. My name is Sergey Matashkin. I work on the architecture team and I'm mostly responsible for the layer between Cloud Foundry and our developers, our development community. Today I want to focus on one aspect of Cloud Foundry: managed services and the managed services API. Cloud Foundry provides a great, very convenient way to create managed services; you can instantiate MongoDB, RabbitMQ, you name it. Cloud Foundry comes with these managed services, and they can be created with just one command line or a few API calls. When we started to release Cloud Foundry to our development community at Comcast, our developers immediately started to use it, and they saw the value for the development process, because it gives them the freedom to start their backing services right away, use them, and remove them when they don't need them anymore. It's completely self-service; they don't need help from anybody. But with this enthusiasm for managed services, they started to come back to us asking: is Kafka supported in the managed services? Is something else supported? So we quickly realized that there is a good demand for managed services, and we need to expand our library of managed services with things we create on our own.
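To make the SNI point concrete, here is a toy sketch of the certificate selection an SNI-capable proxy performs: one listening IP, several certificates, chosen by the server name the client sends in the TLS handshake. The hostnames and file paths are hypothetical; HAProxy does this automatically when pointed at a set of PEM files, so this lookup is only illustrative.

```python
# Hypothetical map from SNI server name to certificate bundle.
CERTS = {
    "shop.example.com": "/etc/haproxy/certs/shop.pem",
    "news.example.com": "/etc/haproxy/certs/news.pem",
}
DEFAULT_CERT = "/etc/haproxy/certs/default.pem"

def select_certificate(sni_name):
    """Return the certificate to present for the requested server
    name, falling back to a default when no specific cert exists."""
    return CERTS.get(sni_name, DEFAULT_CERT)
```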
The first couple of managed services that everybody asked for, and that we felt absolutely needed to be created right away, were a Logger and an Outbound Proxy. The Logger is sort of obvious: Cloud Foundry has a log aggregator, but the actual consumers need to be able to store their application logs somewhere and be able to access and search them. The second is the proxy layer. The proxy layer is required to increase the security of our applications, because we want very strictly controlled communication between our applications and outside parties like Amazon Web Services. With this understanding of the need to extend our library of managed services in Cloud Foundry, we developed three principles for the framework behind our library development efforts. It should be easy and simple to use, because we need to keep extending the library. It should scale as demand grows. And last but not least, it should support the service life cycle; in particular, we need to be able to update our services without any major disruptions or data loss. With this in mind, we decided to mix three building blocks together to build our framework for managed services: Cloud Foundry, Docker, and OpenStack. OpenStack is a very convenient infrastructure-as-a-service platform that allows us to add compute, storage, or network resources to our managed services platform as needed, so it's a perfect tool to support organic growth. Docker is here, well, just because it's Docker, right? Everybody loves Docker, so we want to have Docker here. That's actually only half a joke; we were able to justify the presence of Docker here. The justification is that Docker provides portability, so you can develop Docker containers and guarantee they will run consistently across different environments. Second, Docker provides just the right level of isolation that we need.
And it's very economical to run, because we can run multiple Docker containers on the same VM without much overhead. Docker is also convenient because it helps support the application life cycle: we can do updates and use Docker images to manage our service life cycle. With these building blocks, we needed to put some glue together to build the solution. Here on the right, you can see a pool of VMs that we run on OpenStack, and each VM at any point in time might run several Docker containers, where each Docker container represents a service. To manage the pool, we created the Docker Pool Controller. The Docker Pool Controller is responsible for tracking and managing all the resources in the pool, including VMs, Docker images, Docker containers, port allocations, and storage. All of this is managed by the pool controller, which consists of three elements: the Container Manager, the resource database, and the Capacity Manager. The Capacity Manager constantly evaluates the capacity of the pool and ensures that at any point in time we have enough resources in the pool to spin up more services, more containers. This way, we don't need to wait for a new VM to boot; we already have enough resources pre-provisioned for the next few services to start. The Container Manager is the core of the solution. It's responsible for bringing up new Docker containers and the services inside them, or tearing them down, based on requests from the consumer of this resource. And the consumer is this element here: altogether, what you see here is the Service Broker. For those who are not familiar with the Service Broker interface and API in Cloud Foundry: the Cloud Foundry controller is on top, and when it needs to provision a service, it talks through the Service Broker API. The Service Broker API is very simple; it's literally about five RESTful calls that need to be implemented.
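For reference, the handful of calls mentioned here corresponds to the v2 Service Broker API: fetch the catalog, provision and deprovision a service instance, and bind and unbind it to an application. A sketch of the routing table a broker implements (the handler names are placeholders, not from the talk):

```python
# The five core Service Broker API (v2) operations a broker
# implements. Handler names are illustrative; any HTTP framework can
# dispatch these routes.
BROKER_ROUTES = {
    ("GET",    "/v2/catalog"): "catalog",
    ("PUT",    "/v2/service_instances/:id"): "provision",
    ("DELETE", "/v2/service_instances/:id"): "deprovision",
    ("PUT",    "/v2/service_instances/:id/service_bindings/:bid"): "bind",
    ("DELETE", "/v2/service_instances/:id/service_bindings/:bid"): "unbind",
}

def handler_for(method, path_pattern):
    """Look up which broker operation serves a given route."""
    return BROKER_ROUTES.get((method, path_pattern))
```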
The Service Broker API defines how the Cloud Foundry controller requests new services. That API is easy to use, but it has nothing to do with actually provisioning infrastructure. That's why we put the Docker Pool Controller in to manage all the infrastructure elements. And once we have the Docker Pool Controller, adding new horizontal pieces here, which are the services in our library, becomes a trivial task. Just as an example, since this is a technical conference, I want to show a sample request and response for the Docker Pool Controller. In this case, the Service Broker is asking: go and create a new Docker container using this specific image, the Comcast Logger in this example; allocate one gigabyte of memory for this container; and expose a couple of ports, port 80 and port 5000, to the consumer. When the Docker Pool Controller gets this request, it checks the inventory of available resources, identifies a VM that can run the specific image and has enough memory and resources, allocates ports for the port mapping, and starts a new Docker container. Then it returns information back to the requester about how that container can be accessed. Not exactly the container, but the services: it provides entry points to all the mapped services back to the requester. So that was a sample call on the Docker Pool Controller API. With all these elements in place, we now fulfill all three of our goals. We can very easily extend our managed services offering, because implementation of this layer becomes trivial and we do all the provisioning of the actual infrastructure through a very simple, straightforward API. We have scalability, thanks to OpenStack and the Capacity Manager. And we have the ability to manage the life cycle of our services through the mechanisms provided by Docker.
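The exchange walked through above might look roughly like this. The field names, container ID, and addresses are invented for illustration; the talk does not show the actual wire format.

```python
# Hypothetical Docker Pool Controller payloads, following the example
# in the talk: run the Comcast Logger image with 1 GB of memory and
# expose ports 80 and 5000 to the consumer.
create_request = {
    "image": "comcast-logger",
    "memory_mb": 1024,
    "expose_ports": [80, 5000],
}

# The controller checks its inventory, picks a VM with enough
# resources, maps each container port to a free host port, starts the
# container, and returns entry points for the services themselves
# (not the container).
create_response = {
    "container_id": "b9f1c0",  # illustrative ID
    "endpoints": {
        80:   {"host": "10.0.3.17", "port": 49154},
        5000: {"host": "10.0.3.17", "port": 49155},
    },
}
```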
That's it for this part. For the next section I want to pass to my friend Sam. Sam is from the engineering team, and he is going to talk about how the introduction of a Cloud Foundry platform as a service changed the mindset of the engineering and support teams. Pretty busy slide there, so I'll give you some time to take pictures. So hello, my name is Sam Guerrero, and as Tim mentioned, I work on the Cloud Engineering team along with my colleague, Neville George. Today I want to spend a little time talking to you about our experience, from an engineering perspective, with implementing Cloud Foundry. First, I'd like to thank everyone for the opportunity to share a little bit of our story with you today. This is my first Cloud Foundry Summit and I'm really excited to be here. At Comcast, we have a really small engineering team compared to the enormous virtual footprint that we have, so the thought of bringing in a new architecture was a little daunting for us at first. We thought a lot of things might change in a service model that's been really successful for us and that I helped build myself. That's kind of what I was thinking about twelve years ago, when I was handed eight servers and asked to see if I could get VMware ESX to run on them. Over the last few years, as an infrastructure-as-a-service team, the focus has really been: how quickly can we deploy VMs, and how can we automate those processes? That's great for most teams, and it's a really attainable goal. But it leaves our developers and application owners, our customers, with quite a few tasks to complete after receiving their VM or group of VMs. As most of you know, receiving a new VM kind of leaves you with a bit of a black hole. You have a nice VM, but there's quite a bit to do with it after that. So we wanted to change that for our customers.
With Cloud Foundry, we've introduced a paradigm shift in thinking for our architecture and engineering teams. We want to change our mentality to really focus on the end product of the services we provide, versus just deploying a VM quickly. We have to focus on lowering those barriers to innovation for our product teams and our developers. With Cloud Foundry, we really introduced a self-service model for our application and development teams. That's decreased the time between release cycles for these teams and really helped them out. But the key to that agility is careful coordination between developers, architecture, and engineering. We have to be more involved end to end now, to make sure we're part of that process and can offer a more holistic service model and service offering. We do that by inserting ourselves further along the assembly line, if you will. With that self-service model, what it's doing for us is actually allowing us to be more engaged. We can no longer say it's okay to give our customers a brand new car that they have to take home and assemble the transmission for before they can drive it. We believe that if we make our factory better, everything else will improve. Now, as with most new things, we have had some challenges introducing Cloud Foundry. One of those challenges has been maintaining our CMDB to really map back from Cloud Foundry to our applications. Before, it was really easy: we had an application that we'd map to a VM, which we'd map to an application owner or a group. Another is networking.
So we've had to really expand a lot of the services we provide by getting more involved with firewalls, GSLB, and load balancing, things we really didn't do before; those were really more on the application owner to figure out to get their VMs running. And then finally, there's maintaining Cloud Foundry itself: learning how to deploy buildpacks and create custom buildpacks, how to introduce new stacks, and how we were going to keep up with the releases of Cloud Foundry in general, which can be a little bit on the aggressive side for a team like ours. We really weren't heavily involved in a lot of open source or community-driven projects in the past, so a lot of that was new to us. We found that these technical challenges weren't really as big as we thought they would be, and they've actually given us a lot of new opportunities we didn't expect. We've learned to really interface more with our customers; before, we were just kind of in our engineering hole, where we gave you a platform and it was kind of your VM to take care of from then on. It's also helped us understand more about how the products and services we provide really reach the finish line, what we're really trying to do at Comcast. It's helped us understand what our applications are doing, how they affect the business, and how we're more a part of that process now. It's also helped us become more T-shaped engineers; it's really increased our skill set and helped us develop and learn this new model that we're now part of, this DevOps model, which is a really exciting place to be right now. So our experience with Cloud Foundry so far, from an engineering perspective, has been really positive. It's helped us learn a lot of new things and really focus on all these products and on the end goal of agile product development and time to market.
So with that, I'd like to thank you one more time, and I'll pass the mic over to my friend, Neville George. Thank you. Hi everybody. Hopefully you can all hear me, right? My name is Neville, and I work on the Cloud Services Engineering team along with Sam. I would say Sam's a very nice guy, right? Every time Tim and Sergey come up with ideas, we still have to support them and keep our sanity, so it's very nice of him to say that. What I'll do today is talk about some of the operational aspects of Cloud Foundry that we've found in our environment at Comcast, and some of the tools and things we've done in order to support the Cloud Foundry instance we have. I'll talk about proactive monitoring and also about visibility into your environment as it relates to Cloud Foundry: how they have helped us, what we've done, and what tools we've used to support the environment. Starting off with proactive monitoring: the success of any engineering team is in its ability to actually prevent an outage, proactively monitoring and looking at the key performance indicators to know what is building up toward an outage. In addition, it would be great if you can reach out proactively to your customers, or even better, if you can resolve problems for them. Take customer quotas, for example: if customers are developing and innovating and starting to run out of quota, and we can manage that and make sure they have enough space, it definitely helps. It helps avoid that midnight escalation call saying, hey, we're running out of space. Also, no matter how proactively you manage an environment, it's inevitable that there will be outages, right? So when an outage occurs, the most important thing is to make sure it doesn't occur again, right?
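A proactive quota check like the one just described could be written as a small Nagios plugin. This sketch assumes the usual Nagios exit codes (0 OK, 1 WARNING, 2 CRITICAL) and hypothetical thresholds; fetching actual usage from the Cloud Foundry API is left out.

```python
# Nagios-style check: warn before an org exhausts its memory quota,
# so we can reach out to the customer before the midnight call.
OK, WARNING, CRITICAL = 0, 1, 2

def check_quota(used_mb, quota_mb, warn_pct=80, crit_pct=95):
    """Return (exit_code, message) in the usual Nagios plugin
    convention, based on how much of the quota is consumed."""
    pct = 100.0 * used_mb / quota_mb
    if pct >= crit_pct:
        return CRITICAL, "CRITICAL: %.0f%% of memory quota used" % pct
    if pct >= warn_pct:
        return WARNING, "WARNING: %.0f%% of memory quota used" % pct
    return OK, "OK: %.0f%% of memory quota used" % pct
```

A Nagios command definition would invoke this per organization and exit with the returned code, so the alert fires while there is still headroom.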
What are the additional configurations that can help us proactively manage all these things before we completely hand this off to the operational team, right? We chose Nagios for our proactive monitoring. There's a lot of information available on configuring what you want to monitor and things like that. Now, it might seem very simple, but in a very traditional company, most of the time you have off-the-shelf monitoring tools run by a monitoring team that has an SLA and an intake process, and all of that takes time, right? So what we have done, like Sam mentioned with the T-shaped person, is manage the complete instance of Nagios ourselves, and we make sure we set up all the counters and key performance indicators we need to monitor. So in case there is a problem and we feel that, hey, X is not being monitored, we're able to add that in, say, five minutes, as opposed to the OLAs and SLAs associated with a team that is outside our control. Moving on, let's talk about visibility into the environment. It's very important that we understand what is in our environment and things like that. Cloud Foundry has a great CLI that you can use to get a lot of information; the only problem is that it's not a single pane of glass where you can see everything and click through everything. We had the same problem, and what we found is a tool called the Admin UI. It's available in the Cloud Foundry Incubator, and we have used it. Before I move on, a show of hands: how many of you know about the Admin UI tool? Okay, great, we have a few of us. For everybody who doesn't know, it provides a GUI for knowing your organizations and your spaces: who has access to your spaces, how many spaces you have, your quotas, what the DEAs are and how they're being utilized, utilization metrics of your DEAs, and how many applications are running on them.
It also shows you the growth of your environment in terms of organizations and spaces, and how your environment has been growing over a period of time. It also aids in certain operational tasks: you can create organizations using the tool, apply quotas to your organizations, and things like that. So it's been a very useful tool for us. That's pretty much everything I had on this slide to talk about. I'd like to close by saying that Cloud Foundry has been great for Comcast. Having T-shaped people, as well as a run-your-own-business kind of mentality, has definitely helped us make it better. So that's the end of the presentation. I think we have a few more minutes for questions, so we'll start taking them. All right, yep. [Audience question about whether this is running in production.] Yeah, so we are running in production; I actually forgot that part. We have several key applications that are in production today, and we have a couple of environments that we're scaling every day. I wouldn't call it a huge environment at this point, but we're definitely ramping up, and because of this platform and its usability, several application teams are very interested, so we're going to be ramping up quickly. [Audience question:] You mentioned the automation you put around passing through the domain using the GSLB. Did you also automate the configuration of the GSLB itself, so you can do automatic provisioning end to end? No, well, we don't own that part of the network stack. There are several options, though; there are a lot of services out there that are also self-service. I know some teams I've talked to can leverage Route 53, and that would work well in this scenario. Let me jump in on this question.
So we actually developed two models for that. One, for simple use cases: we can have a centralized GSLB manager, with the site names mapped into centralized GSLB management. That will work for all applications that want to use this model, because it's not application specific. But if a specific application needs very specific health checks and specific failover rules, or its own GSLB, then today it still has to do that the same way it did traditionally, before platform as a service was introduced. So there are two solutions currently. [Audience question:] What processes have you put in place for training developers on how you built the environment and what to use? Maybe all developers are using Docker these days, but how do you tell them? Yeah, I can't say we have a really good training model, but we do onboarding sessions with our development teams, and we do brown bags to make people aware. We focus on the 12-factor application model, because I think that is very important, and on the overall microservices model: not just how to shape your application, but also how to shape your data. So we have this, if not very structured, training with the development teams, because developers need to understand the difference in how they need to develop applications for a PaaS compared to how they did it yesterday. And we have some of our developers here today, so if you reach out to us afterwards, we can hook you up if you want to talk to them as well. All right, I think we're out of time. Thank you so much.