First thing I want to do is thank some of the people who are in attendance today. You're going to have three people from the West Corporation talking at you about our experiences with Pivotal Cloud Foundry. But there are a few Pivots who are survivors of our project, whom I want to thank as well. And it's great because there's a bit of lineage. We've had several waves of platform and application transformation dojos that we've been doing with these folks. And it's about, oh gosh, a two-and-a-half-year relationship, Jason, something like that. And there are people here who represent early 2018 engagements. I think that first instance was in a shoebox, in a closet somewhere, that we actually built out and then went from there. So from those very early levels of engagement to the much broader enterprise work we're doing now, with relatively recent arrivals: we thank you all. We're able to stand on this platform because you guys helped drag us there. So we thank you for that. Let's go ahead and get going. West Corporation. We're going to give you, in a minute or two, how we define ourselves as a company, why we decided to adopt Pivotal Cloud Foundry and work with these fine people and others, and some related projects that we're doing too, because if you're going to do a big giant replatforming effort like this, it's not ever going to be the only change you make in your organization. So we'll talk a little bit about that. And we've got somebody who represents platform architecture, and we've got someone who represents one of the most important early adopters, an application team, one of our first and most important tenants. The way he likes to put it is that he ripped up anything that wasn't bolted down in our environment. And so I'm extremely appreciative of that patient zero as well. We're going to spend some time having both of those people talk to you.
And then again, we really want to get to your questions, because we know that this is an endlessly interesting and pragmatic adventure to adopt something like this. And you might be right along with us. We're about two and a half years in. So on some sort of maturity model, we're somewhere in the middle; other customers are much further along than we are, and there are some that are just getting started. And we probably have questions in both directions, looking forward and looking back, so we appreciate yours as well. Quick introduction: I'm Terry Miles, a product manager and platform owner of PCF at West. This guy standing here is Don Faust, our platform architect. And this man standing here is Jason Nash, a product development manager and owner of a key initial adopter. We're big. We're not huge, but we are big. And there's an unusual flavor to our software portfolio. We're the biggest telco you've never heard of, exclusively business to business. We sit behind a lot of stuff that might touch your lives already. If you have kids in school and you've got some sort of mobile app, or you get some sort of SMS service to come pick up your kids because it's snowing and you've got to go home early, there's about a 50% chance it's our system sending that notification out to you. If you are ever unfortunate enough to have to make a 911 call, there is a very good chance that it runs on our system. If you have a prescription at CVS and your pills are ready, that's going to be us sending out that text message to you. So we're big. We're not huge, but we are unusual in that a lot of our applications use UDP instead of TCP, for example. We are an extremely heterogeneous group as an application portfolio, which I'll go into in a little more detail on the next slide. And we are extremely distributed geographically. There are lots of multinational companies, and we're one of them.
But what I'd also say is that our product teams and our project teams are often distributed among multiple continents. It's not just that this product team sits in this country and that product team sits in that country. We are a pedigree of many different lineages, because we have been a holding company for a long time. We actually started out 20, 30 years ago as a customer call center. And we just kept buying things, growing by acquisition. And the general policy was thinking like a holding company: buy something, let it lie. Don't mess with it. Try to optimize it, make it more profitable. But we were not trying to integrate any of these platforms. This adoption of PCF isn't the first time we've tried to consolidate our platform, but it is the most earnest and definitely, by far, our most successful effort in that regard. And as you can see, all of these different acquisitions that you see on the right represent not just products but product lines, with different decisions and extremely dissimilar architectures among the applications and platforms on this list. So we were trying to bring them all together, and even just the challenge of trying to select who would go first and whom we would try to build out for: these were tough things for us to consider. But we're through a lot of that now. Now we're in the blocking-and-tackling execution phase. Before we got there, though, we had to do some design work and we had to talk a little bit about architecture. And as we did that, we realized that if we were going to change the way we thought about ourselves, from a holding company to an operating company, a kind of software company, there were going to have to be not just replatforming considerations, but CICD initiatives, changing the way we listen to our applications and standardizing the way we do so with an enterprise adoption of New Relic. And meanwhile, like almost any other business you talk to in this regard, we're trying to change the way we go about doing our work.
We can call it agile; there are lots of different names for it. But it's fundamentally about understanding the iterative nature of our work and trying to find some way to get people to lift up and out of their teams and start talking to one another, because anybody who did that to any degree at our business would realize we had a lot in common, a lot more in common than we thought we did. So to stitch it all together, we'll have Don talk a little bit about how design went. Hello, everyone. So as Terry was saying, some of our biggest challenges were trying to bring our company and our developers together as a whole. When I worked in the development community at West, I really had no idea what other groups there were. I didn't realize that there were actually three other groups within West that had perhaps a 40% overlap, in terms of the types of services they provided, with the application group that I was in. And so we really wanted this to be a two-pronged thing, because we were so fragmented in our development and our infrastructure and our operations. We needed a way to bring all those people together, and we needed a platform that could provide both a very opinionated manner of doing that as well as flexibility. We were stuck on old-school VMware architecture that took forever to spin up VMs. Don't get me started on four weeks to get a VM that you can't actually log into because they forgot to install your user on it. All those types of things were just rampant at West, and we were really struggling to innovate faster. Some of our newer acquisitions, of course, were already on the cloud, especially AWS at the time, but they were really all over the map in terms of their maturity model for CICD, for monitoring, et cetera. So what did we want?
We wanted a platform that would be opinionated enough to bring us all together, in terms of starting to build a development culture of how we were going to do things, that would move us to cloud native without necessarily hitting us with a hammer every time we tried to do something. But we still needed flexibility. A lot of our apps are UDP based and hypersensitive to latency, so we needed to be able to operate in places where the opinionated model wouldn't quite work. We wanted to provide guardrails rather than traffic lights for our developers. Our system up till then had very much been: stop, write a ticket, wait for some amount of time, an opaque amount of time, and get a result out of it. We needed something where you could go in and get as much of it done as possible by just clicking a button. We needed something that we could provide across the entire globe. We have centers and business in APAC, EMEA, North America, South America, Africa, everywhere. We needed something that we could stand up all around the world without necessarily having to own our own data centers there. We really wanted to move to a single operations group. Our operations at the time were incredibly fragmented. It was very much every app had its own operations group, and sometimes they'd end up fighting with other operations groups for resources. And if a problem came up, there was a lot of pointing of fingers, et cetera. By centralizing that, we thought we'd be able to provide much better service to our applications. We of course wanted to increase our velocity and the quality of our software, as well as the quality of life for our operators and developers. And we wanted visibility into our platform and what was happening on it, to decrease the opaqueness we saw when an issue would arise and we would need to bring in people from every group to look at it.
So we went through a competitive POC, and we decided that for us the answer at that time was PCF, which has the part that's now called PAS, but we were also really excited about PKS coming, because that would provide the flexibility, the ability to do UDP, all those things that we really needed. In our POC we installed both on-prem and in the cloud. We brought in developers from greenfield apps, from brownfield apps, and from some real hardcore old legacy software, so that we got an actual look at what problems we might start hitting when we ran it as a platform. We realized that none of this was going to do anything if we didn't have a CICD platform to run on. We had groups that did not actually even have their code in source control before this; that's the level of maturity. We also had people who had push-button CICD, true deployment pipelines that ran automatically when something was checked in to GitHub. So we had everything along that spectrum, but we needed to bring everyone up to a basic level. And we wanted to provide an easy way for apps to be monitored, so we brought in New Relic at the same time. So we've done all this, and some of these things we did well, and some of the other things are still challenges we're trying to do better on. The things I'd like to talk about here are either things we did early that were good, or things that we should have pushed even harder on. The first is: bring in infosec as early as you can. We didn't have them as part of the POC, and we should have; that's the level of input that you need. That is still our largest speed bump, getting approvals for egress from the foundation into other systems, but bring them in. It's something new. We're still training them. They're learning to love this. They're seeing how we can provide this security infrastructure as code, and the good thing is they're really supportive of it.
They're just a little bit skeptical that it's really going to work. For us, determining how we were going to set up our foundations and our orgs and our spaces within the foundations was a big deal, and that's something we did well. We have dev/test foundations: the developers check in and iterate on their code at the dev level of the dev/test foundations, and then they need a CICD pipeline to push it into the QA side. This is initial QA, so it's running on non-prod data, et cetera. And then we have production foundations across the world that contain multiple spaces, for staging, which is the final pre-prod testing, and then production spaces. We also decided to provide isolation segments for our public workloads, so it ends up being a DMZ for our applications. This was a request from security and the networking people, and with PCF it was an easy thing to add. We are still experiencing some difficulties with skilling up people in our company. This is a new platform with new requirements, especially in our operations group. It's really exciting to see them realize that they're actually engineers as well, and that their real job is to provide automation more than anything else. Having a plan to bring those people in early would be another great thing to do, and not just operations; remember, your QA people are going to be running on the platform, and there are some new things they need to understand about the system as well. Early on we started classifying what workloads were going to go where; honestly, that started a little later than it should have. A lot of the business plan that drove our acquisition of this was sort of back of the napkin. The earlier you can get to the level where you understand what all your apps are and what they do, the better. Remember that without CICD you have a great platform, but it's never going to achieve the velocity that you need.
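As an aside, the foundation and space layout described above can be sketched in a purely illustrative way. None of these names reflect West's actual configuration; the helper below just shows how a deployment stage might map onto dev/test foundations, regional production foundations, and a DMZ-style isolation segment for public workloads.

```python
# Illustrative sketch only: a layout in the spirit of the one Don
# describes, with dev/test foundations for iteration and per-region
# production foundations holding staging and production spaces.
FOUNDATIONS = {
    "dev-test": {"spaces": ["dev", "qa"], "isolation_segments": []},
    "prod-us":  {"spaces": ["staging", "production"], "isolation_segments": ["public-dmz"]},
    "prod-eu":  {"spaces": ["staging", "production"], "isolation_segments": ["public-dmz"]},
}

def target_for(stage: str, region: str = "us", public: bool = False) -> dict:
    """Resolve which foundation/space a deployment stage lands in."""
    if stage in ("dev", "qa"):
        foundation = "dev-test"
    elif stage in ("staging", "production"):
        foundation = f"prod-{region}"
    else:
        raise ValueError(f"unknown stage: {stage}")
    # Public-facing workloads in prod land in the DMZ isolation segment.
    seg = "public-dmz" if public and foundation != "dev-test" else None
    if seg is not None and seg not in FOUNDATIONS[foundation]["isolation_segments"]:
        raise ValueError("isolation segment not available in this foundation")
    return {"foundation": foundation, "space": stage, "isolation_segment": seg}
```

The point of writing it down this way is that the topology becomes data the pipeline can reason about, rather than tribal knowledge.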
You need that reproducibility, and you need that ability to go back in time if everything doesn't work perfectly. And you need to be able to monitor the platform. This is something that we knew was going to happen, and we struggled a little bit with it. We went with the open source Prometheus and Grafana solution, and it's good, but we still see some challenges there. Be sure that you can monitor your apps; that's where New Relic comes in and is super strong. We made a determination about our databases: we decided that we weren't going to host them within PCF. We consider them to be external services. That has been a good choice for us up to this point, because so many of these systems are linked heavily into legacy systems, and it's been easier to leave the data where it currently is and build a foundation close enough to it to be able to access it. Last of all is: where do the logs go? That hasn't been a strong point for us. That's a lot of data, and it's really an essential part of your applications. We had some applications using logs for things that logs really aren't meant for, like billing events, et cetera, and that's a big thing to get teams to understand. Anything that absolutely needs to persist should go through a persistent store. Logs are for debugging events that are happening and analyzing what's happening. They are not so that you can charge your customer for a phone call that they made. And so the final lessons: automation is not optional. Be ready to train your people to do it. They're going to learn to hate YAML, just like I do and most of us do, I think. We're now on three different public clouds, and we have a recommendation to not use the proprietary interfaces to things like the databases-as-a-service that they provide on those platforms. We still have this goal of making it as easy as possible for us to move applications to a different platform, so it won't be the application deciding.
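As a rough sketch of that portability goal (all names here are illustrative, not West's code), the application can depend on a small interface of its own rather than on any one provider's client library, so that swapping clouds is a wiring change rather than a rewrite:

```python
# Hedged sketch: code against a generic storage interface instead of a
# proprietary database-as-a-service SDK. InMemoryStore is a stand-in;
# a real implementation might wrap a plain MySQL or Postgres driver
# that works on any cloud.
from abc import ABC, abstractmethod
from typing import Optional

class KeyValueStore(ABC):
    @abstractmethod
    def put(self, key: str, value: str) -> None: ...
    @abstractmethod
    def get(self, key: str) -> Optional[str]: ...

class InMemoryStore(KeyValueStore):
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

def app_logic(store: KeyValueStore) -> Optional[str]:
    # Application code only ever sees the interface, never a provider SDK.
    store.put("greeting", "hello")
    return store.get("greeting")
```

The design choice is the standard one: the business decision about where to run stays a deployment concern, not an application concern.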
It could be a business decision as well. We don't see that really happening much, but we have the flexibility to do it if we need to. And of course your culture is going to change with the tool. One of our greatest achievements is that we now have many of our developers who talk across groups, which means across the world. In fact, they now know a lot of people who are doing the same types of things as they are, and it's making our API gateway a lot healthier, because people realize we should be using one best-of-breed service, et cetera. And you have to know that you've just started. No matter how good your plans are, they're going to change; be ready for that. And last of all, it was really important for us that we had true partnerships with both Pivotal and New Relic. We really couldn't have done it without those two companies, and the way they have not just supported us to get more money out of us, but really taken that step, when they see a problem even on the horizon, to reach out and try to help us. And now we're going to have Jason Nash up here to talk about what his team ran into when they started using the platform. Hi, everybody. I know we're coming up on the clock, but that's okay. I talk real fast and I'm highly caffeinated, so we're going to plow through these things real quick. The main thing I want you to take away, and if you don't get all of these things you can follow up with me after, but the main thing I want you to take away is: the Foundry is pretty much just going to work. Your code is pretty much just going to run in it. If it doesn't for some reason, these are solvable problems. Your partnership with Pivotal: those guys have seen how this stuff breaks a thousand times. If you're focused on your code and you're worried about your code, you're thinking about the wrong things in order to be successful. What is going to stop you from being successful is the completeness of your entire ecosystem.
It's all of the components that you need to have in place next to the foundries in order to take advantage of the stuff that's in the Foundry. And it's also the human beings that make up that ecosystem that are going to get in your way, especially if you're an early adopter. As you get in earlier than everybody else, the infosecs of the world don't understand what you're trying to accomplish, and they're going to pump the brakes hard. So the faster you can get them engaged, the better off you're going to be. But don't worry about the Foundry. Don't worry about the code. Your architects will take care of that. The code will run, and as always, your developers will solve the problems. Think about all the stuff around it. And then there are just a few kinds of gotchas that come up. So, how cloud native are you really? My portfolio has everything from VB6 in it to an orchestrated set of microservices running on Tomcat that are Spring Boot. So we took that Spring Boot set of apps in first. But I didn't have the database right, and I learned this very quickly. Do I have one big database for all services? Do I have lots of teeny tiny little databases, one per service? What you really have to think about here is: where are your logical domain boundaries as you design your things, and how do you put stuff back together around those logical domain boundaries? Apps are not the same thing as microservices, and you'll kind of learn that. Think about your sizing and where you draw your boundaries. Do you have file system reliances? If you do, the file system doesn't exist in the PCF world. Are you producing files? Are you reading config files? Things like that. You've got to figure out what your answer is going to be around that. Are you producing logs that handle both APM and kind of debugging? If you're on Splunk, you've probably got dashboards that alert and all that kind of stuff. And when you move into PCF, unless you keep your Splunk, you start going with something else.
You have to separate APM from debugging logging, and that takes some effort and some thought and some coding. Can you stream your logs? How fast can you get them out? Okay, you've got a Hadoop store, but can you get them there in real time, fast enough to debug your applications? If you can't get the logs to where you can access them fast enough, then your logs are useless to you. APM does not equal logging; that's the Splunk problem. Think about boundary monitoring with your APM. It's very easy to snap that on, but it doesn't tell you a whole lot about what's actually going on in there. You've got to write coding hooks. So to take advantage of all your APM, you've got to code some. Can you alert out of your new APM stuff? Can you send a page? Can you send a text message around an alert? Can you hit a service desk thing? Is there connectivity? It's all these ecosystem things that go away once you move into the space. Can you deploy everything with CICD? What monitoring has ops put in place that you don't know about? What Big Brother alerts did they stick on that Linux box? What kind of logging things? Because when you move all that stuff away, your ops guys had something that they were relying on that's critically gone now, and you've missed it. So you've got to have that conversation. Smaller is better, but not too small. There is overhead associated with an app. There is a RAM footprint. There is administrative overhead. There's deployment overhead. We found that we had all these microservices that were really cloud native, and it was like, we need 4,500 apps. And Terry says, no, you can't have that many; you can have 10. And so we had this logical bundling back together of what we thought were really great microservices. Microservices aren't apps. You've got to put things back together. Load tests. Think about your load test before you start.
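Before going on, the "reading config files" question above has a common answer worth sketching. In Cloud Foundry the local filesystem is ephemeral, so configuration typically moves into environment variables, which can be set per app via the manifest or `cf set-env`. This is a minimal sketch with illustrative variable names, not any particular app's real config:

```python
# Minimal sketch: read configuration from environment variables instead
# of config files, since the container filesystem is ephemeral.
# Variable names and defaults here are invented for illustration.
import os

def load_config(env=os.environ) -> dict:
    return {
        "db_url": env.get("DATABASE_URL", "postgres://localhost/dev"),
        "queue_host": env.get("QUEUE_HOST", "localhost"),
        "pool_size": int(env.get("DB_POOL_SIZE", "5")),
    }
```

Passing the environment in as a parameter keeps the function trivially testable without touching the real process environment.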
Your foundations are likely to be in different locations than where you were running code before, which means you're separated from your databases. That's going to impact your load test. Make sure your network holds up, all of those things. Think about your load testing early, because your deployment footprint is going to look different. Make sure you're not going to kill any backing services. Make sure you've got mocks ready to go. Think a lot about geo redundancy if you're going to do that. Are you going to run active-active? Are you going to run active-passive? Do you have all this stuff? How are you going to handle replication on your data layer if you decide to run active-active? How are you going to fail over your database? Are you going to fail your apps from one data center to the other without failing your database? Or are you going to try to fail a database over automatically? Make sure you're thinking about that stuff from a geo-redundancy standpoint. Auto scaling: it's a great feature in the system, but it's not magic. You have to do something to make it work. One of the first things we did in our app dojo was write another app that handled scaling for all of the apps in the Foundry. Each one of our services is fronted by a RabbitMQ queue. Watch the depth of that queue; spawn more instances and tear them down. It's not a magic auto scaling button. You have to invest in how you're going to do that. Make sure that if you scale up, you don't overrun your database connections, all those kinds of fun things. Auto scaling introduces a bunch of problems you haven't had to deal with before. Blue-green deployments: are you ready? The big thing here is, are the human beings ready? Will your clients understand the fact that when you do a release, it's a blended, mixed release now? These things tend to blow the minds of people who aren't in this world. You've got to manage to the humans. CICD, all the things. It doesn't matter if you can push your code if the firewall isn't open.
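The queue-depth scaler described above can be sketched as a single decision function. This is a hedged illustration, not the actual app: thresholds and names are invented, and a real version would read depth from RabbitMQ's management API and scale instances through the platform's API.

```python
# Illustrative sketch of queue-depth-driven scaling: pick an instance
# count proportional to the backlog, bounded both ways. The upper bound
# matters because every extra instance also multiplies database
# connections. All parameters here are invented defaults.

def target_instances(queue_depth: int,
                     msgs_per_instance: int = 100,
                     min_instances: int = 2,
                     max_instances: int = 20) -> int:
    """Return the desired instance count for a given queue depth."""
    desired = -(-queue_depth // msgs_per_instance)  # ceiling division
    desired = max(desired, min_instances)           # keep a floor for availability
    return min(desired, max_instances)              # cap to protect backing services
```

A controller loop would call this periodically, compare the result with the current instance count, and only act when they differ (ideally with some hysteresis to avoid flapping).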
If you have to spend two weeks requesting a firewall change before you push the code, you don't have CD. Everything has to plug into your CICD pipeline, or you will stop; you can't do CD without it. Can you kill, or make happy, all the human gatekeepers? InfoSec, business reviews, change review boards: all of those things are going to be in the way of your deployments, and they all have to get folded into your model. Build a culture that pays attention to health-check things like SonarQube and Xray. You're going to go fast. You need these things to ensure quality, and they will expose problems in your CICD stack that you don't know are there. You can pull security vulnerabilities into your CICD tool that will make everybody vulnerable; watch those things. Don't accept that they're not DevOps. The last thing I've got here is that DevOps is not just Dev and Ops. It is Dev, security, database, networking, business. All of those people who could potentially be in the way of your deployment have to get engaged in the process, or they will slow you down and stop you from taking advantage of the things that are in the Foundry. The ability to properly correspond with the other teams, the other people who develop and operate our software: that stuff started to accumulate for the teams that were either not necessarily willing to replatform or just couldn't confront the problems. Also, another thing that started to happen was that zero-sum game. It's always going to happen: somebody wanting a new feature in X quarters, or some deal somebody in sales just made, versus this thing that you know you need to do. That's going to be the same problem here that you face everywhere else. To go to the final observations, these two are closely related. I understand that that zero-sum dynamic is not going to change, and very likely your first replatforming effort may feel like some sort of skunkworks.
It might be a carve-out, but very soon you're going to have to deal again with your product teams or your sales teams or whoever's driving that bus for new features in your roadmap, and you're going to have to convince them of the value: how you're going to be able to transform your app so that you can get to those more rapid feature releases, basically. If you can have that conversation well, you're going to keep the backing of your executive sponsor, generally a CTO or somebody like that in your organization. You need to equip that person to be able to commit to your project, not just when he or she made that deal, but to recommit to it again in six or nine months when things go weird, because they will go weird. And you have to have that correspondence with other teams, and if you're in the middle like we are, that's your job. You've got to have those conversations. It's not going to happen from the executive level. You just need to