All right, I think we're going to go ahead and get started here. So thanks, everybody, for attending our talk. I'm James Masters. I've been with Kroger for about 13 years, eight of those in an ops platform role and five of those in security. And I'm Ted. I'm going to talk. Thank you. I didn't expect to laugh at that, actually. So I've been doing software development for 20 years, a lot of cloud work for the last four years, and I've been with Kroger for a little over five years. So who is Kroger? It's a very large grocery store chain, basically. We have a jewelry business and other things as well, but we're coming up on 400,000 associates. And I thought another interesting fact was over 200 million meals donated by Kroger. And you may say, well, how have I never heard of Kroger? We go by a lot of different banners, so here's a whole bunch of them. In California, you might know Kroger as Ralphs. So I'm going to take a couple minutes and lay out the background of our journey to Cloud Foundry. I think the place where it makes sense to draw the starting line is our virtualization initiative. It's not as exciting as it used to be; it's just something we do now. We have a virtualization-first strategy. It's just the de facto way we do things. But I think our scale is worth noting: we have over 40,000 VMs in our environment, and we're upwards of 90% virtualized, so a pretty large environment. We brought in Lab Manager around 2010, and that was our first foray into exposing some of that infrastructure via a self-service portal, so that our developers and business partners could go in and request infrastructure themselves, get a VM, and then customize it. That was well received by our company. But it had challenges, right?
I mean, as a dev, you still get to go in and configure all your middleware, load your database, load the right version of whatever Java you're using. I think we upped our game with vCloud Director and vCenter Orchestrator, which we rolled out in 2013. There we started trying to steer people away from creating special snowflakes, right? Instead of customizing something, getting it ready, and then holding on to it forever, we encouraged teams to help us write some code in Orchestrator to get your VM to the point you needed it to be, and then put that in source control as well, so it could be managed and maintained by more people than just the couple of people involved in doing it initially. And that really took off. Then people started saying, OK, we see what you're doing in test/dev. That's great. We want to start doing that in production, right? And we realized quickly, kicking that ball down the road as it was, that we would be able to deploy things quickly. But the hard part is not really deploying systems, right? The hard part is managing them going forward, plumbing them up to the rest of your infrastructure (apparently there was a virus-definition drop today), and then deprovisioning those systems. So getting them online is one thing, but managing that life cycle is another. That's the point where Ted and I, so dev and ops, started working more together across our teams as well. Ted said, you're doing all this orchestration; you have this capability to expose infrastructure this way. And he brought to the table: you know, we've been writing automation code as well. So what happens if we marry the two, feed a few parameters into an Orchestrator process, and out shoots a running environment with an actual application deployed? So that was our first attempt at a cf push, if you will, internally developed.
We presented that to our business partners, and everybody was excited about it, right? Ted's boss, the director of architecture, said, that's awesome, what you guys have done. Now, stop. And we're like, well, this is great. What are you talking about? He said, there are other people working on this problem, other people contributing to it. We want you solving other problems and doing other things. So that's how we got into Cloud Foundry in about 2014. Our experience with that has been very positive, and it's allowed us to focus on other problems within our company. Orchestrating the provisioning of systems and middleware and databases is all still important, but now we can even orchestrate an entire project initiation: orchestrate source control repo creation and things like that. And that's what Ted's going to get into here in a minute. We call it our Kroger internal cloud initiative, and Ted's going to talk about that now. All right. So a little context on that. Why did we do this from a business perspective? We really wanted to consolidate platforms. Kroger is over 125 years old; we've collected a few different system setups over time, so we were looking for a way to standardize that. We also need elastic scaling. Our business has announced that it intends to grow to two and a half times its current size, from 100 billion to 250 billion. That's a lot of scale, and at present we don't scale very gracefully, so we were looking for that. And of course, from the DevOps perspective, we really wanted to build a lot of automation and quality into our processes. So what were some of the requirements? We wanted to make sure we had a system that would support 12-factor as a way to write cloud-native applications, and we also wanted to embrace the infrastructure-as-code mantra. Hopefully these things sound very familiar to you.
We wanted to scale horizontally. Specifically, we had found that vertical scaling tops out, and that's awkward, so horizontal scaling is a much better approach. We also had the requirement that we need to be able to run internally, but we also want to be able to move to a public cloud or run a hybrid of the two. All right, so this is what I'll call the novel approach we're taking. We've said it's great that we can provision environments with Cloud Foundry very rapidly, but what does it take to get a new project off the ground? When we say, OK, we've green-lighted a brand new project, what does a team need? Well, they need some code to start with, right? What's their base set of code? They need ALM tooling. We really want them to use things like source code control (hopefully there's agreement on that one), and issue tracking and continuous integration and all that good stuff. And then they also really need environments. We use an agile methodology where we demo every two weeks to our business, so in order to hit that first two-week mark, we really need environments. In the past, it took significantly longer to get environments, but we wanted to make all this very, very easy for our teams to have and get going with. So we borrowed, or forked, the Initializr project from Spring. How many people have seen Initializr, or start.spring.io? OK, I'd encourage you to go check it out. You can just go to start.spring.io. Basically, you type in a few parameters and you get code out of it, right? It gives you a base set of code, and it's really clever the way it's configured. It's a great start to a project. So it required very little work on our part, beyond forking their code, to get the code spit out that we needed. We have modified it a bit to make it Kroger-friendly.
So we know we have certain logging requirements and things like that, standard things across all our projects that we want teams to have. Next up, we started enhancing the Initializr to script out the creation of source code control. So when the script runs, it creates a Stash repo (we use Stash for our Git repositories), and it goes ahead and checks the code that was just generated into that repo. It creates our continuous integration builds, and it sets them up with best practices right from the start. We know we want our teams to do continuous builds. We want them to have Sonar builds to check code quality, all that stuff. We want them to continuously deploy out to the dev environment and then have a progression that goes from dev to test to stage to prod. We found that going straight to prod is not a best practice. So if we want them to do all these things, it makes it much easier for them if we just put that all out there from the start. So we set up TeamCity that way, and then JIRA for issue tracking, and a couple of other things. And then we make a call to Cloud Foundry and say, OK, for a given blueprint, give us a database, so MySQL, and give us messaging, so RabbitMQ. Then go ahead and take that new artifact that was built on TeamCity, push it out to Cloud Foundry, and have it running out there. So the idea is that someone says to the project team, hey, we've got this new project we want you to start, and ten minutes later they've got code, ALM tooling, and their environments, and they're ready to go. So what does that actually look like? Here you can see, when we forked Spring Initializr, we did some pretty fancy coding: we put Kroger in front of Spring Initializr up there. Pretty proud of that. I did cut out some of the fields just to fit it on the slide, but you can see the team basically fills in their project name.
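For reference, a minimal sketch of what a flow like this does behind the Generate Project button. The start.spring.io `starter.zip` endpoint is real; the Stash URL, project name, and dependency list here are made-up placeholders, not Kroger's actual setup. `RUN` defaults to `echo`, so running this prints the commands (a dry run) rather than executing them:

```shell
#!/bin/sh
# Dry run by default: RUN=echo prints each command; unset RUN to "" to execute.
RUN="${RUN:-echo}"

bootstrap_project() {
  name=$1
  # 1. Generate the base code (a fork could layer in company logging defaults).
  $RUN curl -s https://start.spring.io/starter.zip \
    -d name="$name" -d dependencies=web,data-jpa,amqp -o "$name.zip"
  # 2. Unpack it and push it into a freshly created Stash (Git) repo.
  $RUN unzip -q "$name.zip" -d "$name"
  $RUN git -C "$name" init
  $RUN git -C "$name" add .
  $RUN git -C "$name" commit -m "Initial generated project"
  $RUN git -C "$name" push "ssh://git@stash.example.com/demo/$name.git" master
  # 3. The full flow would also create the TeamCity builds and JIRA project,
  #    then call Cloud Foundry to provision the chosen blueprint.
}

bootstrap_project orders-api
```

The point is that every step is plain scripting against existing tool APIs; the Initializr fork just stitches them together.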
And then on the right side, that's where we've added in the, hey, what middleware do you need? Do you need messaging for this solution, or a database? They just check boxes. And then which ALM tools do you need? They check those off. And when they click the Generate Project button, it runs and does all those things I was talking about. Then they get their links to everything, right? It says, hey, here's where your source code is to check out, where your continuous integration builds are set up, issue tracking, and even a link to Cloud Foundry for, hey, where can I go find my brand new apps that are out there? So what does that look like on Cloud Foundry? This might look familiar. You've got your project out there running with a MySQL database and Rabbit, and because of the magic of Cloud Foundry, it's already got the security credentials provisioned and injected into the application, and it's all ready to run. So what did we have to do to get to this point? One of the things was defining reference architectures. We said, let's start with just a basic one: app server, database, messaging. That's a typical one that we use. But we have plans to create additional blueprints, or reference architectures, to handle scale-out, right? So if we give the team Rabbit and Cassandra, and then the ability to push out to Cloud Foundry, suddenly we have a very scalable app that was automatically provisioned for them. Next up, we had to get operations buy-in. If we spin up all this great stuff and nobody's willing to support it, it's going to be a very short flight. We also needed to get our developers to embrace 12-factor coding principles. This is a general Cloud Foundry thing, right?
If they write code assuming they can put things on the file system and those things will stay there, they're going to have problems, right? So that's the journey: we're providing education to our developers to get there. Centralized logging, this one is also really key. We were excited to see Loggregator as a way to get logs out of Cloud Foundry. But I pretty quickly figured out that if someone calls and says they had a problem an hour ago, and I go to Loggregator, I can tail the logs or see the last 100 messages. But an hour ago is long gone, right? So centralized logging was a really key thing to put in place, a place to dump all those logs. And then an authentication pattern. We had standard security practices for authorizing users to applications, but Cloud Foundry is a new challenge: you don't know the IP addresses of the servers out there. So we had to work through how to move to a token-based model to deal with that. So what's our experience been? We actually had some of these ideas early on. When James was talking about our VCO excursion, we knew we wanted to do some of this stuff, and VCO was a really great tool to achieve some of it. But then we found out pretty quickly that, it turns out, it's complex to build Cloud Foundry yourself. As soon as we brought Cloud Foundry in-house, suddenly it became much, much easier. Like the provisioning script. I don't know, how many lines of code was that? Like 18, yeah. Eighteen lines of code to get a database, messaging middleware, and an app server. We knew it would be simpler, but it was still kind of amazing to actually see it. We've also found in our Cloud Foundry experience that the CLI has been a great help. We were able to hook it into our continuous integration server very easily, and that gave us the ability to do blue-green deployments. It did take a little bit of work to get to that point.
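A short provisioning script like the one described — database plus messaging plus app server in a handful of CF CLI calls — might look roughly like this. The service and plan names (`p-mysql`, `p-rabbitmq`) and the app name are assumptions for illustration, not the actual script. `CF` defaults to `echo cf`, so by default this prints the commands instead of running them; set `CF=cf` to execute for real:

```shell
#!/bin/sh
# Dry run by default: "echo cf" prints each CLI invocation.
CF="${CF:-echo cf}"

provision() {
  app=$1
  # Managed service instances from the marketplace.
  $CF create-service p-mysql 100mb "$app-db"
  $CF create-service p-rabbitmq standard "$app-rabbit"
  # Push the artifact CI just built, but don't start it yet.
  $CF push "$app" -p "build/libs/$app.jar" --no-start
  # Bind the services; credentials land in VCAP_SERVICES on start.
  $CF bind-service "$app" "$app-db"
  $CF bind-service "$app" "$app-rabbit"
  $CF start "$app"
}

provision orders-api
```

Binding before the first start is the reason for `--no-start`: the app sees its injected credentials on its very first boot.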
But once we got there, our business was loving it. Suddenly we could push updates, and they didn't even see the updates happening. It was a zero-downtime deploy, so they really liked that. In some of our situations that's really critical, where we can't really take the systems offline. And sometimes they would wait a long, long time for an enhancement, months and months, before they'd actually say, yes, we're willing to take it. So we can see that in the future, with an automated process to provision out the team and a standard structure to their whole deployments, we intend to give teams blue-green deployment capability right from the start, with the provisioning. Yeah, and we've gone into this with one of our most business-critical applications, too, as a POC. So we're kind of starting with the lowest common denominator, making sure this is a robust and available platform. Yeah, that's the group that's particularly pleased with the zero-downtime deploy. So another lesson we've learned in doing this is that the delete-all script is a really important component. I will say, given that it's a very critical system, I was a little nervous the first time I ran that delete-all script. Even though we ran it in lower environments first, and we're supposed to think of the environments as transient, they were still kind of like pets to me. I know I'm not supposed to keep them as pets, but that first run of the delete-all was a little nerve-wracking. And it taught me the first good lesson of the delete-all script: I thought we had everything in infrastructure as code, and I was wrong. We tried to bring up the environment again, and I was missing pieces. So that script taught me a very valuable lesson. And then, as it turns out, the problem we were having was a problem of migrating from one version to the next. Is there any value to the business in solving the problem of just getting to the new version? No.
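A blue-green deploy driven from CI with the CF CLI can be sketched as below. The domain, app, and route names are hypothetical, and this is one common shape of the technique rather than the speakers' exact script. As in the other sketches, `CF` defaults to `echo cf` for a dry run:

```shell
#!/bin/sh
# Dry run by default; set CF=cf to execute against a real foundation.
CF="${CF:-echo cf}"
DOMAIN="${DOMAIN:-apps.example.com}"

blue_green() {
  app=$1; jar=$2
  # 1. Push the new build alongside the old app on a temporary route.
  $CF push "$app-green" -p "$jar" -n "$app-green" -d "$DOMAIN"
  # 2. Map the live route onto green: both versions now take traffic.
  $CF map-route "$app-green" "$DOMAIN" -n "$app"
  # 3. Drain blue off the live route, then retire it (-r removes its routes).
  $CF unmap-route "$app" "$DOMAIN" -n "$app"
  $CF delete "$app" -f -r
  # 4. Green becomes the new blue for the next cycle.
  $CF rename "$app-green" "$app"
}

blue_green orders-api build/libs/orders-api.jar
```

Because the router keeps serving the old instance until the live route moves, users never see an outage: that is the zero-downtime property the business noticed.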
So destroy it all with the delete-all, bring up the nice, clean infrastructure-as-code script, and your environments are provisioned, and you've saved all that time. I think currently, for five microservices, it's taking about four minutes to provision everything out for us. So it's not even worth thinking about whether it could be an issue. And also, if people know that you're packing that delete-all script, they make sure everything goes into the infrastructure as code, because otherwise it's gone on the next run. Yeah, and some of our VCO and VCD stuff I talked about before — while not a lot of it went all the way to production, I think it did a lot to at least start getting that mindset into people's minds: don't create, again, special pets that you're going to care for over years in our private cloud. Yeah, and as part of the delete-all script and the blue-green process, we found that we had to script out our creation of user-provided services as well. So we really tried to capture everything in source code control and script it all, which was a little extra work on the front side. But boy, it saves a lot of time on the back side. All right, so what are the benefits we're seeing? Well, we have this philosophy of make it easy to do the right thing. And with this whole process of, hey, I start up a project and it's got all the pieces I need to run it, it becomes almost like an executable SDLC, a software development life cycle. It puts teams on the right path from the very start, and we've found that teams tend to follow that process when it's so easy to do. Why go make trouble, right? They just use it and focus on the business functionality, which is really where they should put their focus.
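Scripting the user-provided services means they come back identically after every delete-all run. The names, endpoint URLs, and credentials below are placeholders, not real systems; the CLI commands themselves (`cf create-user-provided-service` with `-p` for credentials or `-l` for a syslog drain) are standard. Dry run by default:

```shell
#!/bin/sh
# Dry run by default; set CF=cf to execute for real.
CF="${CF:-echo cf}"

create_user_provided() {
  # An external system CF doesn't manage, exposed as a bindable service
  # so its credentials arrive via VCAP_SERVICES like any marketplace service.
  $CF create-user-provided-service legacy-erp \
    -p '{"uri":"https://erp.example.com","user":"svc-app","password":"CHANGEME"}'
  # A syslog drain pointing bound apps at the centralized logging endpoint.
  $CF create-user-provided-service central-logs \
    -l syslog://logs.example.com:514
}

create_user_provided
```

With this in source control next to the provisioning script, "everything infrastructure as code" stops being an aspiration and becomes something the delete-all script verifies on every run.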
And then we've seen additional benefits. We know we want to push teams in the direction of automated functional testing, which becomes so critical for testing out newly provisioned environments. If we provision out their code and all their stuff, give them a functional test harness right from the start, and have builds on the continuous integration server running the functional tests against an environment, then it's a lot more likely that they adopt functional testing, right? And I fully expect that we'll realize more benefits as we go with this, because we had done something similar on the code side long ago. It was very similar to Spring Boot in that it was an opinionated framework stack that we gave our developers. We had been using it for a while when a new employee came in and said, hey, how come you guys aren't running Sonar? And we're like, no good reason. And in the span of two weeks, he had migrated all of our projects so they were reporting their results, and we were getting trending on code coverage in Sonar. That was all due to the fact that we had a very common setup across our projects. And since we're now going to have a common setup across our code, ALM, and environments, I'm expecting there are benefits we haven't even seen yet. So, continuous delivery: that's another one we're working on. We're not really there yet for the most part, or maybe some small pockets have achieved it so far. But we can see that we can all work on that problem together. When we go to solve how deployments work, one team can pick that up from another team, because we're using a common way to approach it. And then infrastructure as code: when our process runs the Initializr-plus process, we're not only provisioning out the environments, we're capturing the scripting that was used to do it.
And we put that into the team's source code control. So if the team needs a UAT environment — how many people run into the problem where you're getting close to the release date, and now suddenly you need environments for UAT and performance testing and security audits and all that, and suddenly there's environment contention? Well, if we've got this as just a script, you can spin up the environments you need for that temporary time and then tear them down when you're done. So it really undoes that bottleneck, and it gets people into the mantra of infrastructure as code. All right, so benefits for managers: much, much faster project provisioning. I think one of the key things here is that we can start to try out concepts. When it takes months to stand up hardware and get it going, you limit the number of concepts you're going to try out. If you can spin this up in ten minutes and let an agile team go at it for a couple of sprints, it's much, much easier to give an idea a shot. And when it doesn't pan out, it's also easier to reclaim those resources, put them back in the pool, and keep going. Higher consistency: when we get started with projects, it's much more predictable how much time it's going to take to get going. And since there's consistency across the projects, I can move developers around, and they're very comfortable with the environment because it's consistent. And it's our path to consolidating platforms. If we've been growing all these special snowflake platforms like crazy, how do we get to a more common platform? Well, if we've got a dropdown list in our Initializr that says, hey, do you need a small-scale web app or a scale-out web app — just a couple of blueprints, basically — then we're going to see much less proliferation of environments. Or at least that's my hope; we're at the start of this journey, so we'll see how that plays out. And it's auditable.
You can see, basically, who's starting up projects and who's provisioning out resources, so it gives you a little more control over the projects. How are we doing on time? Good deal. Seven minutes? OK, great. So, questions then? Oh, and thank you for listening. Yeah, thanks. Well, I'm a dev; I like to pretend that's all free. We don't really handle that well right now. We have a little bit of a showback model, so at least we can give people an idea of what things cost on a per-RAM-utilization basis. But more than that, within vCloud, for example, we know the vApp owners, so we send out a report every period to say, hey, you're consuming X, Y, and Z. And we actually just started adding: if we were to actually charge you for that, here's how much it would be. But we're not actually following through on that yet. Isn't he great? That's how you do DevOps, right? Ops gives it away for free, and dev's happy, right? Yeah, so it was about 2007 that we started doing a Maven- and Spring-based standard architecture, and it was the Wild West back then. I think people really liked the fact that they could start to focus on business functionality, so to get the developers to adopt that, there wasn't actually much resistance on that side. And now, since they've gone on that journey, and it's been a good journey, we're finding with adoption of this that we're actually kind of having to hold people back at this point. The devs are very keen to get this, and the project managers and all that. And we've made them so keen because they've been beaten down so regularly over the years. Well, actually, I shouldn't say it's all grins, right? From an organizational change management standpoint, this is a much bigger shift for operations. Now, we've seen some of our friends in operations already get at that.
They move away from mundane work, and they start doing more engineering work at the script and configuration level. So that side of the ops house is actually excited about it, too. But there's another side that's a little cautious with respect to so much change. Was there another question out there? Yeah, currently that is a pure VMware stack underneath: vSphere, VCD, VCO, and then on the front end, a simple portal for some of that is VMware Service Manager. We are looking at things like VIO, and we're looking at OpenStack as well. We have played around with provisioning to other hypervisors as part of that, to see how that works. But what you saw today is all vSphere-based, at least on the infrastructure side, yeah. So basically, it's just using the CF CLI to script that out. We know, for a given app project, which microservices we deploy out there, so a lot of it is just cf delete for each app. Use the -r option; otherwise you can leave routes out there, which can create a bit of a mess. And then, as I mentioned, we have some user-provided services and things like that, so we clean those up as well. But that's really all there is to it. The delete script is actually very small; I think the mentality it brings is the real magic. And maybe not specific to that question, but on some of those blueprint deployments we talked about earlier through VCO and vCloud, those we deliberately expire: those things blow up in seven days. So we just communicate to the people using it, hey, make sure whatever got that thing to that state is in source control, because we're not holding on to it forever. Yeah, that was a great one that James had suggested. We got people in the mindset of it being transient right from the start; I can imagine it would have been tougher to introduce later on. But once you get used to it, it's fine. So, I've upgraded Ops Manager and Elastic Runtime twice in place.
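The delete-all script described there is small enough to sketch in full. The app and service names below are placeholders standing in for a project's microservices; the `cf delete -f -r` and `cf delete-service -f` calls are the real CLI commands. Dry run by default, as with the other sketches:

```shell
#!/bin/sh
# Dry run by default: "echo cf" prints the commands; set CF=cf to execute.
CF="${CF:-echo cf}"

delete_all() {
  # Delete every app in the project; -r also removes its mapped routes,
  # which otherwise linger in the space, and -f skips the confirmation.
  for app in orders-api inventory-api pricing-api; do
    $CF delete "$app" -f -r
  done
  # Clean up the managed and user-provided service instances too.
  for svc in orders-db orders-rabbit central-logs; do
    $CF delete-service "$svc" -f
  done
}

delete_all
```

Running this regularly is what enforces the discipline: anything not captured in the provisioning scripts simply doesn't survive the next cycle.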
We were just talking about this earlier, walking around outside. When we take this to a more production rollout — and we're still in the POC phase, to be honest with you — we'll have at least two foundations running, and we want to get people in the mindset of keeping apps deployed to both foundations synchronously. So not only replicating them, but making sure, through our CI/CD process, that they're consistent. My thought is that we could almost take foundations into maintenance mode, or make that upgrade a blue-green deployment process, maybe cross-data-center, such that it's a non-event. Now, we don't have that in place today; that's all just whiteboard thinking, but that's my thought on it. So, obviously we're doing Cloud Foundry, so we're doing a little bit with its containers. As far as Docker on its own, we have little teams looking at it, myself included. I think we're — at least I am, and I think we agree on this — most interested in seeing how Docker will play into Cloud Foundry, and how we can use Cloud Foundry to orchestrate Docker containers. We've got a couple of guys looking at Kubernetes and, again, roll-your-own type things around Docker. But again, I think some of the things we've learned getting to where we are today — provisioning containers is, well, I was at one of the presentations here earlier, I think it was that Dr. Nic guy, and he was like, well, congratulations, you've provisioned a container. And that's the easy part, right? So I think that's the extent of where we're looking at Docker right now. I think the application authentication and authorization model for apps in Cloud Foundry, container isolation, and data classification of apps being pushed to CF. I mean, we're a highly regulated company. So we have everything: PCI, DEA, HIPAA, right? And we have that heavily segmented today at all layers.
So for me — and I started in security as well, so my friends over there in corporate information security — you're like: just because you have this big VLAN now that your Cloud Foundry is running in doesn't mean you can all of a sudden start pushing applications of all kinds of classifications in there, and then, on the target systems that are firewalled, just open that VLAN up into that zone, right? So those are my top concerns: data classification, segmentation, and the app authentication model in CF. Yeah, those are some very real challenges. The security teams also like some aspects of this, though. As soon as I told them I wasn't going to log into a box anymore, they were pretty excited about that. And then also having the blueprints, right? That's something where we can say, OK, here's the application stack, and it can be vetted very carefully. They're excited about that consistency as well. But what James said is very, very real. Yeah, we're hoping some of the micro-segmentation — whether it's NSX; we just went through an evaluation of some of those technologies — I think that's where they're headed, supporting micro-segmentation at kind of a container level. So we're very interested in seeing where that goes. Nope, we're done. All right, thanks. Thanks, guys.