Okay, now for something a little bit different. I'm Matt Haynes and I lead the cloud groups at Time Warner Cable. With me is Jason Rural, who leads the OpenStack team. You've heard from his team several times this summit about how we did OpenStack at Time Warner Cable and some of our challenges and successes. What we thought would be interesting, hopefully for an audience of folks who are looking at OpenStack for enterprise use, is to talk about the culture of your company, what installing OpenStack into that culture does, and what you can do to make that transition go a little better.

So we'll introduce, at a really high level, the OpenStack project at Time Warner Cable: why we did it and where we are with it. I'm going to be brief with that. If you missed the technical talks Jason's team has been giving, they're all online, and I encourage you to watch them to really get into the details. And it's probably important to make this caveat at a technical conference: this is not a technical talk.

So let's talk about the culture at Time Warner Cable, starting with a little bit of where we are. Not too long ago we set out to stand up OpenStack at Time Warner Cable, and Jason's team spent about six months going from whiteboard drawings to a production cloud up and running. That was a phenomenally fast stand-up given the operational requirements we gave them for what I call enterprise OpenStack. We all know the cattle-versus-pets model, and we like to think all of our customers are going to be these ruthless cattle folks with elastic applications, spinning up and destroying VMs on demand all the time. The reality is, if you're doing this inside an enterprise that's existed for a while, and especially inside an enterprise as fresh and new as the cable companies, you need to be ready to support a certain kind of application: the application that assumes the VM really isn't going to die, that it's going to stay around for a while. People do give them names, and they really are upset when you kill them. So there's a blend at work here, and I think it's really important when you do this for a company that you understand what you're getting into, and not just assume it's going to be this one way where everyone kills and destroys VMs and does horizontal scale. We wanted to be able to take a pretty much off-the-shelf application that expects its VM to live and give it a high chance of living in our environment.

For enterprise OpenStack, the big things that had to show up were these. We needed a geo-redundant footprint, and we have it in two of our national data centers. Object storage is replicated across them. Our identity service is global. And the biggest thing, which has come up in several of our talks, is live migration. I don't think it's a commonly used feature among OpenStack operators, but for us it's very important: nine times out of ten it lets us preserve a VM that we would otherwise have to kill for a reboot or some other event. We can often evacuate that VM and save that pet's life for one more day.
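To make that live-migration workflow concrete: it can be driven through the Nova API. Here is a minimal sketch using the openstacksdk client; the cloud name, server name, and shared-storage assumption are placeholders for illustration, not a description of the actual Time Warner Cable tooling.

```python
import openstack

# Named cloud from clouds.yaml; "twc-east" is a placeholder.
conn = openstack.connect(cloud="twc-east")

server = conn.compute.find_server("pet-vm-01")

# Move the instance off its current hypervisor so the host can be
# patched or rebooted without killing the "pet". host=None lets the
# scheduler pick a target; block_migration=False assumes shared storage.
conn.compute.live_migrate_server(server, host=None, block_migration=False)

# Poll until the instance settles back to ACTIVE on its new host.
conn.compute.wait_for_server(server, status="ACTIVE", wait=600)
```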
So with those enterprise OpenStack requirements, the team was up and running in about six months, at the mid-year mark last year: a really impressive feat for anybody who's done OpenStack at scale. The six months from mid-year to the end of the year were really about maturing the platform. We were up and running with those features, but there was still a lot of operational maturing going on: making sure the team could operate it, bringing on new customer teams. We were also working on the customer story, and part of what we're talking about today is how you go get those customers and how you get them over the cultural gap you're going to face. Jason will get into the details of that. From the beginning of this year until now, it's really been about expanding the platform. We've put a new underlay network architecture in place, and we've added a new national data center that Time Warner Cable has expanded into. We're adding more services; there was a Designate talk from Time Warner Cable earlier today. So that's where we are on our path, and it's going really well.

Why did we do it? Why did we do OpenStack, and why did Time Warner Cable take on the broader cloud initiative I was brought in for? It really came down to three propositions. The first is making sure that your development community, our customers, can move faster. This is about giving folks programmatic, on-demand infrastructure so they can use automation, develop a DevOps model to the extent they want to, and build applications that way, with the support they need from the company's underlying infrastructure. You heard earlier that we now support a lot of what you might call over-the-top video platforms; we write clients for iPhones, tablets, and Roku boxes, and the pace those run at is very fast. So we really did need to be faster for our developer community.

Cost is always a big thing. Coming in, I was a little surprised by how much money we spent, so it wasn't surprising that one of my mandates was to bring those costs down: we should be able to run infrastructure at lower cost. OpenStack in particular, as one of the big tools in our cloud arsenal, gives us that capability. It's not just because it's open source; that can be a bit of a fallacy, since you still have to hire people and sometimes pay for support licenses. It's because OpenStack is a pure software layer of infrastructure for us, and it lets us build a commodity hardware platform under the covers. The hardware we buy now looks much more generic and common across multiple vendor sets. I can get these vendors beating each other up sometimes, and I know they always appreciate that.

And finally, it was important to build this in a way that created a reliable footprint for applications. The company has always had lots of data centers. We're not shy of data centers, network pipes, or most infrastructure.
But nobody ever took the time, on behalf of developers, to put infrastructure together in a way that HA and DR features just came with it. As a result, the applications supporting the video, broadband, and phone services that run across the company ran with different levels of reliability through major events. Our job was to smooth that out and create consistency in the way we provide HA and DR support to our applications.

So that's what we set out to do. And with that, we stepped into a culture at Time Warner Cable that I don't think is different from a lot of telcos, to be honest, but it was different from the folks you grab off the street who have been using OpenStack or AWS or something else. This is where the mismatches show up, and it's what we're going to get into today. I like to say it's an operate-over-engineer culture, hardware over software: if you can go buy an appliance to do something, then let's do that, as opposed to creating software on top of something more generic. The culture was very much vendors over open source. It was, and remains, a very vendor-friendly community. We're not trying to be unfriendly to our vendors, but we are trying to bring in the notion that open source is an important part of the mix. For me, it all comes down to a culture built around stability over speed. There were a lot of processes in the company that create slowness for the sake of stability, and the interesting thing, which I don't think I have to sell here, is that that's a bit of an illusion. What most people come to realize is that speed is stability, and that's one of the big things we're trying to get across.

That's about all I wanted to say, so I'm going to hand off to Jason, who will take you through some of the shifts we've made. And I want to point out that we're hoping this works almost like a two-man panel: we deliberately don't need the full 40 minutes for the talk, so if you have a question about your own company, please come up and ask.

All right, great: the technologist gets to talk culture. This is going to be fun. No, it's not. No, actually, there are a lot of cultural changes that we've undergone, and are still undergoing, with the rollout of OpenStack, both within the OpenStack team and, more interestingly, across the company. I'm going to focus on three of them today: application migration, the DevOps transformation, and tooling.

Application migration is interesting because we have a lot of traditional applications migrating to the cloud. Let me back up and say that we rolled this cloud out without any mandate to move to it. There was no stick; it's all carrot. The carrots are the added values: everything we've been hearing about speed, agility, that type of thing. But those carrots aren't well known in a traditional enterprise, so you have to do a lot of education. There's also a lot of concern about the reliability of the system, so you need to demonstrate that it is reliable.
And then there's a transparency component. I'll go into each of these a little.

From an added-value perspective, I'm not going to talk about the self-service basics, but we added some things to our specific deployment to help non-cloud-aware applications come join the cloud. We have the geo-redundant object storage, which is great for disaster recovery. Many of the systems at Time Warner Cable are sitting in one data center; lose that data center and you lose the whole app. With a geo-redundant data store, backups are very easy to do and they're automatically available in the other region. In the same vein, we set up multiple regions for our cloud, so customers can have compute resources in both regions and do things like global load balancing between them for nice HA. They start looking at this and go, OK, that's cool, I can see value in moving to that kind of environment.

I list live migration here because, again, it's really an operator tool from my perspective: it helps me and my team do our jobs, so we can upgrade the OS, do kernel patches, and so on. But from a customer's perspective, if their application runs on only one instance and is a single point of failure, they can't deal with us having to reboot a box. Worst of all, that reboot, given that Time Warner Cable likes to do maintenance in the middle of the night, requires them to be available in the middle of the night, which kind of sucks. Live migration alleviates that problem for them, so it's easier for them to run these less cloudy applications in our environment. We also offer anti-affinity scheduling, so instances can be placed such that they don't all land on the same hypervisor; there's a sketch of what that looks like after this section. And then, of course, features, features, features: there are tons of new capabilities we're adding that they're not getting in other parts of the organization, like load balancing, DNS, and monitoring.

Education is key. We do a lot of onboarding sessions. People sometimes don't even know what virtualization is; we have a wide spectrum of customers. Some are fairly sophisticated in what their development teams do, and others are looking for the form to fill out to request a specific type of instance, and the fax number to send it to, so they can come back three months later and hopefully have an instance. So onboarding sessions were important: meeting with small groups across the business to explain how to use this environment best for their type of application. We also did tailored training. You can get a lot of OpenStack training from vendors and third parties, but you can implement OpenStack a thousand different ways, and we wanted our customers to learn how to use it in our environment. How do we do block storage? How does our networking work? We use Neutron with ML2 and a VXLAN overlay, so it's very different.
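Here is the anti-affinity sketch promised above, using openstacksdk. The group name, image, flavor, and network are placeholders; this shows the general shape of the feature rather than our exact setup.

```python
import openstack

conn = openstack.connect(cloud="twc-east")  # placeholder cloud name

# A server group with the anti-affinity policy tells the Nova scheduler
# to place the group's members on different hypervisors.
group = conn.compute.create_server_group(
    name="web-tier", policies=["anti-affinity"]
)

# Boot two instances into the group; the scheduler keeps them on
# separate hosts, so one hypervisor failure can't take both down.
for i in range(2):
    conn.create_server(
        name=f"web-{i}",
        image="ubuntu-14.04",
        flavor="m1.medium",
        network="tenant-net",
        group=group.id,  # passed to Nova as a scheduler hint
        wait=True,
    )
```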
So the tailored training sessions happened face to face, and now they're rolling out virtually across the whole organization. Users get accounts in our production cloud so they can do labs and exercises. We teach them not just how to use the tools but how to build cloud-aware applications, and we introduce them to Puppet and Ansible: hey, this actually works on top of the cloud too. They learn things like that, plus FAQs and so on. And the last piece is important: we have a lot of OpenStack experts on our team, and we give people access to those team members to answer questions and help them along.

Transparency was an interesting one for me. Some of our customers used to own the infrastructure end to end: they owned their machines, the support and operations of them, and the applications on top. Now they're moving to infrastructure as a service and they feel like they're losing some control. So trust really needs to be built between those teams and the teams running the infrastructure, and we've tried to move toward that, fairly successfully I think, with transparency. We provide health dashboards that show the health of the system and its uptime; customers can drill down and see whether any events are happening. They like that: if their application isn't working, they can check whether there's a known issue in the environment. We have deployment dashboards. We deploy weekly, sometimes multiple times a week, and some of our customers want to know when those deployments are happening, because we don't do them during the normal maintenance hours in the middle of the night. We do them in the middle of the day, which is another cultural shift. We feel that deploying more often and more frequently is less risky, and doing it during the day, when you have engineering resources around to work on things if they go awry, means less impact if there's going to be any. So we make dashboards available to let customers know when deployments are starting and completing. We have incident dashboards for publishing known problems, things like that.

And the last one is interesting: we're starting to introduce showback, actually letting customers know what they're using and what it potentially costs. I mentioned the stick and the carrot earlier; this has all been about carrots, just giving people something cool so they start using it. But they will start exploiting it, too, and that's where showback comes in handy. They'll really see their true utilization and its cost, and that rolls up to the VP level, where it's very useful in budget planning going forward.

The next cultural shift is the DevOps transformation. The application migration piece was more about taking existing applications and figuring out how to get them onto the cloud.
The DevOps transformation is more about the greenfield applications we're seeing built on the cloud, because the way people develop applications is changing: they're taking the cloud into account. They're building cloud-aware applications and thinking about things like fail-fast deployments. Everybody knows cloud-aware applications are composable, extensible, and elastic; they should be able to scale out and scale in, so they need to be somewhat stateless. We're getting more and more customers building those applications, which is awesome; that's what we want. And the interesting thing is that if you build your cloud-aware application correctly, if you make it tolerate the loss of a VM and scale it out horizontally, you can build an application that's very reliable, maybe even more reliable than traditional enterprise applications running on very high-end hardware, where all the availability comes from the hardware. You can get to four nines, five nines. And that's what we're seeing: people starting to take advantage of that, which is great. (There's a small sketch of what tolerating the loss of a VM can look like after this passage.)

When you start building cloud-aware applications like this, you need to start thinking about automation, because if you want to scale out horizontally and scale back in, you have to do that in an automated fashion, and if you want to recover and be fault tolerant, you need to be able to rebuild things on the fly. You're really automating the creation of your environment and its maintenance, and now you're into DevOps. That's where the whole DevOps model comes in: application developers are actually thinking about operations. How is this going to be deployed? How is it going to be monitored, so I know when to scale it out and back? We have teams on our cloud now that are moving to full DevOps models, which is awesome. And as you move to the DevOps model and have more automation, you can have more rapid deployments. Traditionally, as Matt said, deployments at Time Warner Cable have been very methodical, very slow, very deliberate. With automation you can start thinking about deploying more often and about more fail-fast delivery: if you fail, fix and deploy again, fix and deploy again, iteratively.

All right, the third and last shift is tools and processes. As an OpenStack team we've introduced a lot of tooling for CI/CD and for automating our cloud infrastructure, and that tooling is now starting to find its way into other parts of the Time Warner Cable organization: the CI/CD pipeline, some of the processes we have for rolling things to production, and the tools themselves are starting to be used outside of the cloud. I'll get into that in a minute. I'm not going to walk through our CI/CD process flow, since we've had presentations on that already, but we do use quite a few tools in conjunction to do CI/CD in our environment.
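Here is the sketch referred to above of what tolerating the loss of a VM can look like. It's a deliberately naive reconciliation loop written with openstacksdk; a real team would typically drive this from an orchestration or monitoring stack, and every name in it is a placeholder.

```python
import time
import openstack

DESIRED = 3  # how many web servers the application wants alive

conn = openstack.connect(cloud="twc-east")  # placeholder cloud name

while True:
    # Count the members of the tier that are still healthy.
    healthy = [
        s for s in conn.compute.servers()
        if s.name.startswith("web-") and s.status != "ERROR"
    ]

    # Boot replacements for anything that has died; the app treats
    # instances as replaceable cattle rather than irreplaceable pets.
    for i in range(DESIRED - len(healthy)):
        conn.create_server(
            name=f"web-{int(time.time())}-{i}",
            image="ubuntu-14.04",
            flavor="m1.medium",
            network="tenant-net",
            wait=True,
        )

    # Cull errored instances so capacity frees up for the next pass.
    for s in conn.compute.servers():
        if s.status == "ERROR":
            conn.compute.delete_server(s)

    time.sleep(60)
```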
From an OpenStack cloud perspective, we stay very close to upstream trunk: we're able to contribute changes upstream, pull them in, test them, and deploy them very quickly, and we do that weekly. We're now starting to work with our physical networking teams to bring this same CI/CD tool chain to change control and automation for the switches in the physical network, which we're really excited about. And we have other teams building applications on top of our cloud that are making use of the same CI/CD infrastructure.

I talked about tools, but processes, and actually having one, matter too: you can't just randomly push changes to production. In our environment we have a virtual development capability where we develop our cloud on top of our cloud. Each developer can spin up a complete cloud, even with the regional aspects, do their own development work, and know that it works. Changes then get submitted for review through Gerrit, merged to master, and rolled out through staging and production. Some of these mechanisms and processes are now being used by other groups too: a video application team is using our same virtual development capability to develop their video application platform backend, so they can rapidly spin up all their dev and test environments on top of our cloud.

On tools and processes more broadly, there are a lot of things you need in order to operate as a DevOps organization. How are you going to communicate? How do you run a distributed team? Being able to use HipChat and Confluence, and having Ansible and Puppet and even Gerrit publish information into those channels, means everybody knows what's going on at any time of day. You need an on-call schedule with escalation policies, things like that. These tools are starting to permeate outside the OpenStack team: lots of other organizations are coming to us saying, hey, we heard about this cool internal chat mechanism you're using, can we use that too? We're tired of Lync or Jabber or whatever. Same with some of the other tooling. So that's one aspect of tool leverage.

The other is the infrastructure monitoring capability we've been developing so we can operate and support our cloud. We've been a big supporter of and contributor to Monasca, and we use it to monitor our infrastructure at massive scale. That's now going to roll out to our customers so they can monitor their instances as well as their applications, because they can push custom metrics into the system. And beyond that, there are people in other parts of the company running things that are never going to be on our cloud, things on physical hardware for various reasons, but they need the same monitoring, thresholding, and notification capabilities, and they don't need the rest of OpenStack for that. They can leverage this monitoring capability as well. So there's a lot of leverage going on.
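As an illustration of the custom-metrics point, here is a minimal sketch using the python-monascaclient library. The endpoint, token handling, and metric details are placeholders, and the client signature has varied across releases, so treat this as the general shape of the call rather than exact usage.

```python
import time
from monascaclient import client

# Placeholder endpoint and token; a real deployment would obtain the
# token from Keystone and discover the Monasca API endpoint.
monasca = client.Client(
    "2_0",
    "http://monasca-api.example.com:8070/v2.0",
    token="KEYSTONE_TOKEN",
)

# Applications can report their own metrics alongside the
# infrastructure metrics the Monasca agent collects.
monasca.metrics.create(
    name="app.checkout.latency_ms",
    value=182.0,
    timestamp=time.time() * 1000,  # Monasca expects milliseconds
    dimensions={"service": "storefront", "region": "east"},
)
```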
Good, so to wrap things up before we open it up for questions: if you're thinking about putting OpenStack in your enterprise, we obviously recommend it; that's what we've done. But I do think it's important to pay attention to the cultural implications that come with it. That means being appreciative of the kind of customers you're inheriting and what they're going to need. And, as Jason mentioned, one of the really cool things that happens is that the tooling, the methodologies, and the mindset your DevOps OpenStack team brings start to trickle out into the company and find interesting little ways to infect other teams, in what I think is a positive way. Changing the technology is the easy piece; we're all technologists, and we like to wrap our brains around technology problems. This culture thing is tough. You need executive backing; trying to change culture in a company without it is pretty difficult, and luckily we had it. And while you're making change (Jason mentioned that we do our deployments in the middle of the day, which is a totally different mindset for the company), one of the things I'm trying to do is get people used to change itself. Change is the new default culture; we're going faster. As I alluded to earlier, there's a notion that we should choose stability over quickness, and what I'm trying to teach people is that it's the opposite: you really get stable when you're fast. So change is the default culture for us now. We're certainly not done, and there are corners of Time Warner Cable that still need this DevOps mentality to infect them, but I've been really happy to see not only what this team has accomplished but how it has started to change the company. So that's it; we'll stop there and take questions. If you can, just step up to a mic; otherwise I'll try to repeat the question.

The first question was about funding: how did we get the CAPEX and the approval to move forward, and did we make a business case? These kinds of initiatives are hard to fund. I stepped into this role with some of that business case already made; we had the CTO behind the idea that this was something we should do. The funding piece was a little more interesting. Maybe this isn't unfamiliar to some of you, but I came in as a team of one, and I had to go steal my budget from a lot of people who, surprisingly, aren't really willing to give you budget. So I kept repeating the name of our CTO a lot, and I managed to cobble together the money. Frankly, it's not as expensive as you might think to put this together: you can get an early version running for a pretty small fraction of a typical CAPEX budget at a company like Time Warner Cable. By the way, there's a panel at 4:30 that's also going to talk about business drivers, ROI, and other considerations. I'll be on it along with some folks from DreamWorks and others, so I encourage you to bring those questions there as well.
The next question: you mentioned the carrots, and the showback that's almost a chargeback to the internal teams moving onto your OpenStack cloud. Was that one of the early carrots? What was the motivation for your first one or two big internal wins: were teams looking to save big internal costs, or, absent any chargeback, what was the prime motivator?

So the showback wasn't really a carrot; it's closer to a stick, or at least it can turn into one when we eventually do actual chargeback. What showback did provide was insight for the teams into their actual utilization and uptake. As for how we got the first few big projects going, that was actually not too hard. There were people clamoring for resources: in some of our data centers there's just no more hardware space available, and we had set up a lot of capacity in our cloud. It was just sitting there; you can go have it. So it was really the availability, plus all the other value-adds, that got people jumping on it.

The next question: from a culture-shifting perspective, which is more important, building the cloud or having the cloud? And if you had to do it again, would you build your own, or partner with someone to provide you a cloud?

That's a good question, and I'll get Jason's take on it too. Mine is this, and by the way, OpenStack isn't our only cloud technology at Time Warner Cable; we have other elements of technology under what I'll call cloud. What we tried to convey a little here is that some really interesting side effects showed up because we built our own cloud. It was the DevOps team Jason brought on board that started to operate totally differently and do deployments in the middle of the day, which really freaked people out. And it was the tool chain they brought: all of a sudden, an app development team that wanted to get better at this had a tool chain, and a bunch of people who knew how to use it, that they could just start using. By building the cloud ourselves, I think we infected the culture far more than if we had just bought one, parked it, and said, here's your new cloud, what do you think?

I'd agree with that. And to answer the other part of the question, if we were to do it again, would we build or partner? I would absolutely build it again. (That's my job, sure; that's job security. But no, really.) The reason we decided to build it in the first place was that we didn't want vendor lock-in. We wanted the flexibility to expose the features and configuration we wanted, and if we saw something new coming quickly in the community and wanted to move to it, we could just go do it: we have all the CI/CD in place and can effect that change ourselves, without having to go ask a distro vendor when they're going to have it in.
Oh, it'll be in six months, in the next release, or something. OpenStack is moving very fast, and a lot of it is still immature and figuring itself out. As long as it's in that state, I want to stay as close as I can to trunk and control my own destiny.

The next question: back when you formed the OpenStack team in 2014, what was its composition? Did you reach outside to get OpenStack-skilled people, or did you let some of your folks from, say, network engineering or systems engineering graduate up, even if they didn't necessarily have the skill set to start with?

Good question. You've heard the expression "two guys and a dog"; that's kind of how we started. As I said before, we started as a team of one, me, and I'm not very useful, so I quickly needed to go find some useful people. The first place you look is often internal, and we went around and found some really key individuals inside the company. I wasn't so concerned about skills. I knew they didn't have OpenStack skills per se, although one actually did; I was more interested in attitude, whether this was something they'd be excited about and engaged with. So that's where we started, and there's a story I've told before about how we got going that really was two guys and a dog at the beginning. But you have to recognize that if you're going to do this at scale at a company like Time Warner Cable, you have to get professional real quick. So most of the expertise came from bringing in Jason, and Jason then brought in folks from outside the company with open source and OpenStack experience. I think that's really been the accelerant for the company.

I'll add one other thing. We've been striving for DevOps, so we have a good mix of very strong software developers, people for whom that was their whole realm, and some really good operations folks. I've been moving both sides of that spectrum toward the middle so they can actually mesh, and I think that's really what you need.

A quick follow-up to that: have you run into any challenges between this newly formed OpenStack team and your traditional infrastructure teams? There's so much dovetail there, and you've potentially got two teams trying to dictate the future from an infrastructure standpoint. How did you overcome that?

From my perspective, it's not so much about trying to dictate the future. I'm responsible for the infrastructure for the company, whether it's the subscriber side or the classic IT side, and what it really comes down to is the realization that there's no one right answer for everybody. What somebody on the IT side of the house needs to host an off-the-shelf Windows application looks a lot different from what the team standing up twc.com needs. So the approach really is to provide what my customers need. OpenStack was a genuinely new piece of that: there was nothing like it at the company. We had bare metal and lots of networking; we had virtualization technologies and things that started to approach cloud.
We had nothing that looked like elastic, programmatic infrastructure. It just wasn't there. That was the gaping hole OpenStack had to come in and fill. Now, when you look at our portfolio, I can take any customer in the company, whether they're running services for five million subs or a back-office IT application, and slot them correctly into the cloud architecture. What's really important, though, is that everything start to behave like a service model, not "I want a bare metal box, please go rack it up for me." That's the real transition. So I don't think it's quite as competitive as that, and all the rest of our folks are still busy; they all still have jobs to get done. There's a lot going on. OpenStack was really additive.

The next question: any compromise on the performance or stability of the applications that moved to the cloud, considering, say, a network implementation in software versus hardware? And second, have you had issues integrating cloud applications with existing legacy applications that aren't so greenfield-friendly?

On performance: people ask all the time what our workloads are, and the answer is, we have everything. We didn't build for one specific workload; we're general purpose. We have people running traditional web-property-type stuff. We have people running the video application backend systems; TWC TV, our video-over-IP system, has a lot of its components on there. We have people running typical dev/test, and somebody running a huge Splunk cluster. All those customers have been very satisfied to date. Now, I can guarantee you there are things running at Time Warner Cable today that would not run in that environment; some of the encoding systems, and especially the real-time workloads, would run into performance issues. But we are open for business for any workload that wants to go on there, and we want feedback on the experience so we can tune and provide the kind of performance people need. We can certainly change things. To date, though, we haven't really had any real performance issues.

The other part of the question was about network connectivity into back-office or legacy systems. One of the things we did at Time Warner Cable, because we wanted customers to be able to spin up an instance very quickly, was make sure they didn't then get stuck waiting multiple weeks for someone on the networking or firewall team to open ports, since that defeats the purpose of being fast at spinning things up. So we worked with the networking teams to open up connectivity from all the Time Warner Cable backend systems to the OpenStack environment, and the customers themselves control access with security group rules (a small sketch of that follows). So there's open access into OpenStack, but you don't always have access back from OpenStack into some of those legacy applications, because they do have firewalls in front of them. In those cases our customers do need to work with the security teams to get those ports opened, so there's still a little time lag with that.
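To illustrate the "customers control access with security group rules" point, here is a minimal openstacksdk sketch. The group name, port, and CIDR are placeholders chosen for the example.

```python
import openstack

conn = openstack.connect(cloud="twc-east")  # placeholder cloud name

# A tenant-owned security group: the application team manages it
# themselves, with no firewall change ticket required.
sg = conn.network.create_security_group(
    name="storefront-web",
    description="Customer-managed access to the web tier",
)

# Allow HTTPS in from an internal backend range (placeholder CIDR).
conn.network.create_security_group_rule(
    security_group_id=sg.id,
    direction="ingress",
    ethertype="IPv4",
    protocol="tcp",
    port_range_min=443,
    port_range_max=443,
    remote_ip_prefix="10.0.0.0/8",
)
```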