Thanks for joining. Let me start with a quick introduction. I'm Jeroen van Rotterdam, as they say in Dutch, or you can call me J if you'd rather not try to pronounce it. I'm the CTO for the EMC Enterprise Content Division. And Mike is here as well. Go ahead, Mike. And I'm an architect on the CloudStats platform that we're building for content applications.

What we want to explain today is how we're using Cloud Foundry in production, the struggles we had deploying it, why we picked it in the first place, and what the business benefits were.

So, our environment. Several years ago we started to build a new content platform. The Enterprise Content Division is primarily focused on content management applications: rich content apps, process-centric apps, and apps that sit at the intersection of content and process. It's a pretty sizable division, by the way — around $700 million in revenue and roughly 1,500 people, so that's the order of magnitude. Can you go back? All right.

What we wanted to do is build a new platform for our customers and partners to build applications upon — and we're in the business of building applications and solutions on top of our own platform as well. We set out to build a base platform that handles this matrix where you can have multiple applications running in the same environment for multiple tenants. So we built a multi-tenant platform from the ground up. There are many challenges if you go multi-tenant, but there are many promises as well — cost of ownership being an obvious one. So we wanted a single environment for multiple apps and multiple tenants, managed by Cloud Foundry. We started to adopt Cloud Foundry about two and a half years ago, when it was still with VMware, before it moved to Pivotal and then to open source. So we were extremely early in making that commitment. We said: there's a good promise here, which we'll explain today, so let's go for it.

Behind the scenes, you've got this single environment running multiple apps and multiple tenants, but they actually share a lot of content-centric and process-centric services — microservices, as they're called these days. These are services for, for instance, transforming content from one format to another, a process engine to run business processes, an engine for case management, secure full-text and metadata search, storing and modeling metadata, and storing content files in a scale-out way. Each of these underlying microservices needs to be scaled independently, because some are more CPU- or memory-intensive than others. And all the apps reuse this common application platform as a service, as we call it, with these common services.

The reason we started to build this multi-app, multi-tenant platform is that in our business there's a trend towards smaller applications with a shorter lifecycle. The big monolithic apps are going away. Specifically on mobile, you see a need for more targeted apps that solve one specific business problem really well, with an excellent user experience, rather than one big app that tries to address many different use cases.
If you think about that model, there's an interesting dynamic: the apps get smaller, and it takes much less effort to build an app like that, but the lifecycle of the app is much shorter as well. In fact, the underlying data in the platform — the content and metadata — will outlive the lifespan of that small app. So you actually need to think about decoupling your data model from the applications. You need a very fast-paced model to build and deploy these apps, and you need to manage many of those different apps on the same shared environment. So there's absolutely a need for managing multiple applications over the same shared data and content.

But there's also a need for multi-tenancy. We're in the enterprise segment, but we're moving down market, and if you go down market, the marginal cost of onboarding a new customer needs to be as small as you can get it. So we were really focused on bringing the TCO down for adding new tenants. At the same time, a lot of our on-premise enterprise customers actually have a multi-tenant problem of their own: they have divisions or country organizations that need a level of isolation. So even in the enterprise segment, you see a need for multi-tenancy within one company.

All right, let's move on. Another thing we did: we built this multi-tenant public cloud platform as well as solutions on top of it. One thing customers really like is that they can personalize their URI as well as the behavior of the application. So we're running these shared services — all these microservices in our application platform as a service — on top of our persistence layer. We've got three main persistence models: Cassandra as the key-value store, our own database XDB as the scale-out metadata repository, and the scale-out content file store. On top of that sit these buckets of shared services that we need to scale individually, and then our applications. The first application we built is the so-called Supplier Exchange app. Each customer has their own dedicated URI to hook into the platform.

We need to manage the versioning of the applications as well as the dependencies between a specific application version and the underlying shared services and the persistence layer. And when you upgrade your cloud, which we do for all tenants, you migrate tenants towards the new bits from the application level down. So this dependency between the router configuration, the shared services, the application bits, and the versions of each is something we manage using Cloud Foundry. We didn't want to set all of that up by hand — doing it by hand would be a disaster, right? That's the last thing you want to do.

This is our real production setup. Like I said, we started two and a half years ago, and it was really early. We built brand-new application services, we ported a bunch of assets from our on-premise platform and made them multi-tenant, and we built a new app.
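As an aside on those dedicated per-tenant URIs: the routing itself is plain Cloud Foundry routing to the shared application; what the platform adds is turning the hostname into a tenant context on every request. A minimal sketch of that idea — the function name and domain below are made up, not our actual code:

```python
# Minimal sketch, assuming tenants live on subdomains of one shared domain.
def tenant_from_host(host: str, shared_domain: str = "example.com") -> str:
    """'acme.example.com' -> 'acme'; rejects hosts outside our shared domain."""
    hostname = host.split(":")[0].lower()            # drop any port
    if not hostname.endswith("." + shared_domain):
        raise ValueError(f"unexpected domain: {hostname}")
    return hostname[: -(len(shared_domain) + 1)]     # strip '.example.com'
```

Every call the application then makes to the shared microservices carries that tenant id, which is what keeps metadata, content, and search scoped to the right tenant.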
It took a long time to actually build the foundation — first under the radar, and then it became a full project within the division. By early 2014 we really started to focus on deploying the environment.

So this is the real-life situation. We went into production on August 1st, 2014, running as much as we can of the non-persistent services in Warden containers. There are actually three applications. There's the end-user-facing Supplier Exchange, but we created two other applications as well. One is a tenant management console, so the super user at a tenant can manage their own environment, invite users, and create that viral effect within — or beyond — their organization. And then we created a platform management console, which is the environment for us to manage the tenants: for instance, if a tenant refuses to pay, we can pause them, and it's where we provision new tenants when they give us a purchase order.

Then you've got the Warden containers with the microservices: the process engine, the metadata server, transformation services, and analytics capabilities. We've got this really nice analytics engine that's essentially a configuration attached to the data model of an app — you can think of it as a tap: depending on how we configure it, more or less data flows into our analytics platform, which is based on HAWQ and Pivotal HD. Then there are a number of Cloud Foundry services: we built our own brokers (service gateways originally) for XDB, we had to build one for Cassandra, and we're using RabbitMQ as the message queue. And there are a few BOSH-managed virtual machines: for instance the Swift store, which is the interface for storing files; our full-text engine, which we ported over into this environment; and a multi-tenant federated authentication module, which was not easy to build. Plus a few outliers: a bunch of VMs for HAWQ and Pivotal HD, which we're moving towards a services model — that was just a temporary thing — and, unfortunately, one Windows VM for an AD-sync type of setup. All the rest is just Linux.

After we decided we wanted all this BOSH and Cloud Foundry stuff, we took it to the security guys, and they said: how can we secure this? Normally we like to have firewalls around our applications and around our services — how are we going to do that? It took a little bit of convincing and a little bit of talking: the CF router is one VM, and we can put it in its own network. The BOSH stuff — the BOSH Director and all the BOSH VMs — can be in its own network. All the DEAs can be in their own network, because they're going to run the applications themselves, and all the CF components — Cloud Controller, UAA, and so on — can run in their own network. So we've actually partitioned our production environment into six networks, with the CF services and all the persistent stores in their own networks as well. BOSH itself was working fine — the firewalls and that kind of thing were a lot of work, but BOSH just deployed everything into the right networks, and we were really happy with that.

Hey Mike, what drove the decision to partition the network like this? I can see that you want to isolate the Wardens, right?
You want to keep those apart because you don't trust them — but what were the other key decisions here?

So the key decisions were: the CF router is the endpoint — everything coming in from the internet hits the CF router — so we wanted that isolated. If it was hacked or broken into, we wanted to contain that; it's the publicly facing piece. The DEAs are where the applications run, so REST calls from outside get all the way into the DEAs, and we wanted to isolate those as well. And BOSH was really seen as the most sensitive part: it's the one that talks to vCenter and creates and deletes VMs, so we wanted BOSH isolated too. That's how we landed on this layout: the persistent stores in their own network, the router — public-facing — in its own network, the DEAs running the applications in their own network, and BOSH, which holds all the secrets and passwords, in its own network. We don't want someone breaking in and deleting our VMs. That was the thinking.

And this is not uncommon in a real enterprise setting, right? Not uncommon at all. Although, if you look at the stock spiff templates and Cloud Foundry manifests, they don't break things up like this, so we had to do a little bit of work to add the extra networks and lay it all out. But once you do that, it deploys quite nicely.

After talking to the security guys, you talk to the operations guys and the engineers and developers, and they kept asking: why are we using this Cloud Foundry thing? Why are we using this BOSH thing? So we came up with a short list of why we chose it. One: we wanted to standardize on BOSH. We talked about lowering the total cost of ownership, and TCO is not just building the application — it's the operational side as well. Having everything deployed with BOSH is a big time saver for us. We can deploy all our environments with one command: we have several BOSH manifests, but bosh deploy sets up the data center exactly the way you want it. You don't have to worry about operations people typing the wrong command or misconfiguring things.

We like Cloud Foundry for the scalability and also for the tenant URLs. We talked about acme.emcond.com or foobar.emcond.com — one of the requirements was that the customer needed to be able to pick their own URL. So in our application they can type in the URL they want, we check with Cloud Foundry whether it's already in use, and if it isn't, we just deploy it: we talk to Cloud Foundry through the APIs, give them the new URL, and they can go to that URL, log in, and be on their way. On top of that, we were able to use the Cloud Foundry APIs to build our own upgrade tool for blue-green deployments, so we don't have any downtime on any of our services or applications. I'll talk through that a bit more later.

Relying on BOSH really hit home with everyone when the Shellshock event happened last year — a big security event, the OS was vulnerable and needed to be patched. For us there was no real worry, because this was built into our plan. The plan was: get the new stemcell, run bosh deploy, and we're done.
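That "get the new stemcell, run bosh deploy" plan is simple enough to script end to end. A minimal sketch of the idea, assuming the BOSH v1 CLI, a director already targeted, and made-up manifest names — not our actual tooling:

```python
# Sketch of a Shellshock-style rollout: upload the patched stemcell, then
# redeploy each environment from its (spiff-generated) manifest.
import subprocess

def bosh(*args: str) -> None:
    subprocess.run(["bosh", "-n", *args], check=True)  # -n: non-interactive

def redeploy(manifest: str, stemcell: str) -> None:
    bosh("upload", "stemcell", stemcell)   # push the patched OS image to the director
    bosh("deployment", manifest)           # select the deployment manifest
    bosh("deploy")                         # recreate VMs on the new stemcell

# Roll through environments one at a time, gated by CI results in between.
for manifest in ["ci.yml", "perf.yml", "preprod.yml", "prod.yml"]:
    redeploy(manifest, "bosh-stemcell-ubuntu-patched.tgz")
```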
It took a little while — we waited for Pivotal to update the stemcell with the patched OS — but once we had the new stemcell, fairly quickly, over the course of a couple of days, running through all the environments and doing the proper testing, we had everything up and running. So: upgrade the stemcell, run bosh deploy, go drink beer.

Let's talk a little bit about that. These are almost 500 virtual machines that were updated automatically, right? Yep. Can you explain the environments? There are 16 Cloud Foundry environments — it's good to understand that we have quite a slew of test environments on Cloud Foundry, as well as pre-prod, an integration pre-prod, and production. Do you want to elaborate on that?

Yes. The beauty is that once we'd done it once — we did it in the first Cloud Foundry environment, our CI environment — the CI tests were running automatically, and an hour later we had a green build saying everything was running fine on the new stemcell. At that point we were highly confident that all the other environments were going to work. So we took the manifest and the stemcell, handed them to the next environment down the line, ran bosh deploy, and that worked too. Next one down the line, bosh deploy again. Once it worked in one environment, we knew it was going to work everywhere. That's the degree of confidence BOSH gives us: run bosh deploy in one, hand the manifest to the next person who needs to update their environment, and it just upgrades.

The level of automation is super important here. In this environment we've got applications with end-user interfaces, as well as microservices and persistence, with REST services in between, and we test everything in an automated fashion. We've got a full test automation suite, and the new UI technology — Bootstrap and AngularJS — makes it so much easier to build an automated test suite with decent code-coverage measurement and so on.

So, can you tell me a little bit about the Cloud Foundry API — why did you go the API route? Yeah, the CF CLI by itself would allow us to do blue-green deployments. If you've played around with it a little bit: you can push a new version of the app, you can map routes, move routes, add new routes, and delete routes. But that would be a bunch of manual steps, and we didn't want to hand that off to an operations person. The multi-tenancy is built right through the stack, so when a request comes in on a URL —

[Audience question: what do you mean by multi-tenancy?] For us, multi-tenancy means that multiple customers of Supplier Exchange are all using the same Supplier Exchange application running in Cloud Foundry, and we differentiate the tenant based on the URL. So if they come in on the Acme URL... Okay, that's fine.

One other thing about the Cloud Foundry API. If you do blue-green upgrades and you push new bits, there are many upgrade scenarios, from very simple — you push just the app bits — to new versions of the microservices, new database versions, and so on. Sometimes the scenarios are more complex; maybe your data model changes. And in a lot of cases, when you push new logic into your production environment, you have to tweak some configuration as well. So it's not just pushing the bits — the configuration for the existing customers needs to change too.
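Both the self-service tenant URLs and the blue-green route moves boil down to a handful of Cloud Controller calls, which is why we wrapped them in a tool instead of handing an operator a list of steps. A rough sketch of the "claim a tenant URL" part against the v2 API — treat it as illustrative: the endpoint shapes follow the v2 Cloud Controller API of that era, but the host, GUIDs, and token handling are placeholders:

```python
# Rough sketch: check whether a hostname is free on our shared domain, create
# the route, and map it onto the already-running shared application.
import requests

CC = "https://api.example.com"                 # Cloud Controller (illustrative)
HEADERS = {"Authorization": "bearer <token>"}  # token obtained from UAA first

def claim_tenant_url(host, domain_guid, space_guid, app_guid):
    # 1. Is the hostname already taken on the shared domain?
    taken = requests.get(f"{CC}/v2/routes",
                         params={"q": f"host:{host};domain_guid:{domain_guid}"},
                         headers=HEADERS).json()
    if taken["total_results"] > 0:
        return False                           # ask the customer for another name
    # 2. Create the route in our space.
    route = requests.post(f"{CC}/v2/routes", headers=HEADERS, json={
        "host": host, "domain_guid": domain_guid, "space_guid": space_guid,
    }).json()
    # 3. Associate the route with the shared app so the router starts sending traffic.
    requests.put(f"{CC}/v2/routes/{route['metadata']['guid']}/apps/{app_guid}",
                 headers=HEADERS)
    return True
```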
And that reconfiguration is exactly what our upgrade tool handles. It actually does two things during an upgrade: it does the blue-green upgrade of the application bits, but it also talks to the microservices themselves and reconfigures them as needed, updates their data model, or lets them know there's a new service or a new version available. It updates the versions along the whole stack — I have a slide on that.

And then there's all the stuff we didn't have to build. By choosing Cloud Foundry, we didn't have to monitor the VMs ourselves, and we didn't need a bunch of clustering to make sure everything was up and running. You do a bosh deploy and BOSH makes sure all the BOSH VMs and software are running; you do a cf push and Cloud Foundry makes sure the applications are up and running. Resource scaling: we've had to teach our performance team to think a little differently about scale. Before, they would ask: can 20,000 people all use this one server we've just deployed at the same time? With scale-out, we don't have to worry about 20,000 people on the same instance; we just need to know how many users one instance of the app can handle. Is it 50? 100? 1,000? If it's 1,000 and we need 20,000, we just deploy 20 of them. It gets the performance team testing in a different way. High availability is taken care of by Cloud Foundry. Log collection: Loggregator sends the logs along so we can collect them in one place. And health metrics — we don't have to wonder whether the system is healthy; Cloud Foundry tells us.

So we really like BOSH; BOSH lays out the data center for us. We started out without spiff, and before spiff it was a little hard: communicating "modify the manifest in this way", figuring out how to patch a manifest — it was complicated. Now that spiff is out, we run spiff 16 times in our build system. TeamCity runs the spiff merge every time and — oops — creates a new Cloud Foundry configuration for us. A developer can actually go to the TeamCity build and see what's in production: how many DEAs are running, how many nodes are in Swift, what settings are being used. They can just look at our build and see what's deployed in production. And this is key for us to be able to push things throughout the infrastructure. CI is one thing, but the performance guys may be on a week-old BOSH manifest, and if they come back and say "I'm running this build of the manifest", we can say: okay, go update, this has changed — and everyone stays on the same page.

Here's a bit more detail on how we do an upgrade with our upgrade tool. At the top you have our tenants and the application versions they're using. We may have a stack of Supplier Exchange blue and Injust blue — Injust is one of our services, the main service that we use — all running on Cloud Foundry. We start up the green version of Injust, and then a new instance of the blue version of the application pointed at it, so that we can move tenants one at a time onto the old version of the application running on the new services. And these are the microservices, right — Injust is one of the microservices?
Yeah — it's an internal name; we forgot to update this slide. So now we have the old application running on the new services, but we've only moved over two tenants. We can test those tenants, make sure they work, make sure the new services don't fall flat under production load. Then we start up the new version of the application, move tenants over to the new application running on the new services, and everybody's happy. And we can do all of this without downtime, because our upgrade tool just does it one by one — we can upgrade all the tenants, or list exactly which tenants we want to upgrade.

So Mike, this is the environment during an upgrade, during the move of tenants, right? We want to keep them all on the same stack eventually, but you do this over a period of time, because you have thousands and thousands of tenants. If you move them all at once and you hit issues, it really hits the fan — you try to avoid that. So we've set up one test tenant: we move that test tenant first, do some basic validation and a little bit of scale testing if we want, and if that's all good, we run the command a second time and move everybody else over.

These are some of the gaps we ran into. Some of them are because we started way back on version one of the APIs, before everything went to version two. We first wrote our service brokers as service gateways and then converted them to service brokers. And the CLI keeps changing. Even last week, on Wednesday, we found a production issue where a customer couldn't do something because of something that had been pushed the week before. On Thursday we fixed it, and on Friday the developers wrote a script of Cloud Foundry commands to apply the patch, because it was just one jar that had changed — we accepted some manual steps just to get the fix in as quickly as possible. But the steps they had written were based on the version 5 CLI. On Saturday, when we were doing the promotion — well, actually we caught it on Friday during the pre-test in pre-prod — the commands didn't work, because pre-prod and production run the version 6 CLI. So we had to rework it. We caught it before it hit production, but the CLI keeps changing.

There's a bunch of other stuff, but we're running out of time. We've contributed a bunch of the pieces that aren't core to our services: ClamAV — the open-source antivirus — we have a BOSH deployment for that; Swift, with an HAProxy in front of it; and a deployment VM, which is just the BOSH CLI and the Cloud Foundry CLI. If you want to run these in a production environment, we didn't want a non-BOSH VM where the ops people log in and do things — a Windows VM or something like that — we just wanted a BOSH-deployed VM. So we have a VM that just has the CLIs and some basic users and passwords.
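To tie the tenant-by-tenant upgrade walkthrough above together: the per-tenant cutover is essentially a route move on the CF router, done one tenant at a time — test tenant first, then everyone else. A condensed sketch of that idea, using cf CLI v6 syntax; the real upgrade tool also reconfigures microservices and handles data-model changes, and the app and host names below are made up:

```python
# Condensed sketch of a per-tenant blue-green cutover via the cf CLI (v6).
import subprocess

def cf(*args: str) -> None:
    subprocess.run(["cf", *args], check=True)

def move_tenant(tenant_host: str, domain: str, old_app: str, new_app: str) -> None:
    """Point one tenant's URL at the green app, then detach it from the blue app."""
    cf("map-route", new_app, domain, "-n", tenant_host)    # both apps serve briefly
    cf("unmap-route", old_app, domain, "-n", tenant_host)  # cutover, no downtime

# Move a single test tenant first and validate before sweeping everyone else.
move_tenant("t0-test", "example.com", "supplier-exchange-blue", "supplier-exchange-green")
# ... run smoke / scale tests against t0-test.example.com ...
for host in ("acme", "foobar"):
    move_tenant(host, "example.com", "supplier-exchange-blue", "supplier-exchange-green")
```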
I think this is the base principle: as a business, we want to focus on high-level application microservices around content, process, and case management. We build solutions on top of that, and our customers build solutions. So we really want to focus on building the apps and enabling others to build these apps. Everything underneath it, we don't want to own. The fact that we had to build some of this ourselves is because it was early, early days, and we really want to contribute it back to the open source community, because it's not our primary focus. Ideally there would just be a catalog of BOSH releases, and you could go out there and say: I need this BOSH release for Hadoop, I need this BOSH release for Swift — pull it down from a common spot, with versions and updates, and know it's going to work for you. We also did two service brokers, for Cassandra and XDB. We didn't do a Swift service broker, because we basically have one Swift user and didn't need to share the Swift cluster, but as we add more microservices we're getting to the point where we'll need to share Swift a bit more, so we'll probably add that over the next few months.

Right, let me tell you a little bit about the environments we have. Here you see the Cloud Foundry environments. When we push something to production, we have a full CI/CD pipeline, and at any point in time we can pick a green build — here's the 803 example — and push it into a wide variety of environments. There's one for functional testing as well as internationalization testing. There's a separate Cloud Foundry environment for performance testing, and one for longevity testing. There's an integration test environment, because our solutions are typically a hybrid cloud structure: a public cloud environment that might integrate with a private cloud, single-tenant environment. We do upgrade validation in a separate environment as well. And there's a pre-prod environment that mimics production before we push to production.

These tests run in parallel, and — by the way — everything from end-to-end UI testing to performance and longevity is fully automated. The performance test takes the longest, about 12 hours. We've got predefined quality exit criteria for each of the test results, and we automatically log the test reports into a portal so we have a full history of how each push meets the exit criteria — these are the test results from that particular push. So it takes us about 12 hours from hitting the button to having the bits live in production, which is an enormous improvement. And by the way, the longevity test we just let run: normally for our enterprise software we do seven-day longevity testing to check for memory leaks, which doesn't fit a 12-hour window, so it makes sense to keep that environment up and running for a couple of days as well.

And here you see the actual deployment history. Like I said, we went GA in production on August 1st. It took us a long time to build the foundation and the first application, and then we did a bunch of upgrades. In the first 27 weeks since GA, we did 29 full releases. There were eight BOSH upgrades and two Cloud Foundry upgrades, plus 22 upgrades of the applications — for each of the three applications, so that's actually 66 application upgrades — and one stemcell upgrade, which was the Shellshock issue. There were stretches like October 8th, 9th, and 10th: three days, three promotions to production.
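Going back to the Cassandra and XDB service brokers mentioned a moment ago: a v2 broker is just a small HTTP service the Cloud Controller calls to list, provision, and bind services. A bare-bones sketch — Flask is used here only for brevity, and the service names, IDs, and credentials are placeholders; the real brokers do considerably more:

```python
# Bare-bones v2 service broker sketch: catalog, provision, bind.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/v2/catalog")
def catalog():
    # Advertise one service with one plan; the Cloud Controller reads this.
    return jsonify(services=[{
        "id": "xdb-service-id", "name": "xdb", "description": "metadata repository",
        "bindable": True,
        "plans": [{"id": "xdb-plan-shared", "name": "shared",
                   "description": "shared scale-out cluster"}],
    }])

@app.route("/v2/service_instances/<instance_id>", methods=["PUT"])
def provision(instance_id):
    # A real broker would create (or reserve) a database for this instance here.
    return jsonify({}), 201

@app.route("/v2/service_instances/<instance_id>/service_bindings/<binding_id>",
           methods=["PUT"])
def bind(instance_id, binding_id):
    # Credentials returned here show up in the bound app's VCAP_SERVICES.
    return jsonify(credentials={"uri": f"xdb://user:secret@xdb.internal/{instance_id}"}), 201
```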
Earlier this year we went to weekly cycles. So on Thursday, the dev team and the ops team come together and say: all right, does it make sense to push something to production, yes or no? Then they hit the button, and by the end of the day it's done. We've really gone to a weekly deployment model. And by the way, in between — somewhere in October — we rewrote the entire application: we went from Ext JS to Angular and Bootstrap in that timeframe and pushed the entire app again, built from scratch, in a very short period of time.

The real disruption here is the agility we're getting. Our enterprise software that's deployed on-premise is typically on 12-month release cycles: every 12 months, a new release. In our public cloud environment we're at weekly — and we could do it every day, but in principle we push to production every week. That changes the game. The concept of a release becomes really strange; it doesn't make sense anymore to talk about releases, because you push when you want to. Patch trains and patch releases are irrelevant: if you have an issue, you just fix it on the latest bits, get a new green build, and push that green build with the patch and whatever new functionality comes along with it. We also build in switches: every new piece of customer-facing functionality has a switch, so we can push it to production even if the marketing team isn't ready to announce it — we leave the switch off, and turn it on when we want to expose the functionality to customers. So the whole concept of a release and of release management gets strange, and the concept of a roadmap gets strange too; you're really talking about investment themes and priorities in your backlog rather than releases.

All right, we're almost out of time. Any questions in the audience? Go ahead. [Audience question: what is a stemcell?] The stemcell is basically the OS packaged up with a BOSH agent, so that when you run bosh deploy it creates the VM from that base OS image, and then BOSH can talk to the BOSH agent on the VM and deploy your software for you. So basically the stemcell is the OS image you want to run. And we're running Ubuntu — we just updated to 14.04.

All right, that's a good question — I figured Docker would come up. They're both containers, right? We don't care that much about the container itself; we care that our application is up and running. And we don't have developers running Cloud Foundry on their desktops when they're doing development: they still do Java, Gradle, Maven build-and-run kinds of things on their laptops, and then push to Cloud Foundry for the CI part once the code is committed and ready to run. So we haven't hit the problem of developers wanting to build Docker containers and run them yet, although we might get there over the next year. And there is a fundamental difference. Yes, Cloud Foundry can handle Warden containers as well as Docker containers, but there's a difference in principle between taking your dev environment — your Docker container — and pushing that into production, versus Cloud Foundry's model where you develop your app and push the application into production.
And somehow I would be a little bit nervous pushing your dev environment into a production environment; you have to put a lot of controls around that. The moment you deploy your app instead, you have a ton of controls — we have a ton of controls in our test suite, in the way we deploy, and so on.

[Audience question about our approach to tenancy — which approach is more scalable, or more modern.] More modern, I don't know, but we do multi-tenancy at various layers. Every process in the engine, whether it's a microservice or an app server, can handle load from any tenant, and that gives you really nice horizontal scale: you just spin up more instances. At the router, every request is in the context of a tenant — we identify it with a key when we authenticate — so for every request, no matter which node processes it, we know which tenant it belongs to. At the database level we partition: we have a single database engine that scales out horizontally, and we partition the metadata into separate database instances within that same engine. And we don't do that per tenant for — maybe you can explain.

So we made a decision early on that we didn't want to spin up services or applications for every tenant, because we want tenants to be self-sign-up: they can sign up for the service and start using the application right away. So we don't run BOSH commands or CF commands to provision new databases or new tenants or create more spaces, but we do have multi-tenancy built in throughout the stack. The application knows which tenant a URL belongs to and makes requests in the context of that tenant. The security at the services layer is based on the tenant: there's a ticket associated with the tenant, so you can only read that tenant's data, and we enforce that with checks at the security layer. And in the database we partition things too: we create a database within XDB for each tenant; in Cassandra it's all mixed together, but certain tables belong to certain tenants; and in Swift the data is partitioned into spaces per tenant, so it's easier to manage. But it's a logical partitioning, not a physical one. And because none of the processes or databases at any tier is dedicated to one tenant, you get full utilization of your resources. Does that make sense?

So you're asking about privacy and compliance issues. We're in a highly regulated industry — that's actually the majority of our market. We do see issues with, for instance, the Patriot Act, where European customers don't want to run in a US data center — and the other way around, by the way: US customers don't want to run in a European data center. So we have multiple data centers around the world, and we replicate the entire environment in a local data center to adhere to geo boundaries and compliance requirements. We don't see a real issue with having a shared environment. The world is rapidly moving towards shared infrastructure, and as long as you can prove there are decent security boundaries around the processing for one tenant as well as the persistence, and you can really partition a tenant's data, regulated industries are actually fine with that.
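To make that logical partitioning a bit more concrete, here is an illustrative sketch of how per-tenant names might be derived so every service stays scoped to one tenant — one database per tenant in XDB, tenant-specific tables in Cassandra, one Swift container per tenant. The naming conventions are hypothetical, not the platform's actual scheme:

```python
# Illustrative only: tenant-scoped names for each persistence tier.
from dataclasses import dataclass

@dataclass(frozen=True)
class TenantScope:
    tenant_id: str  # derived from the request URL at authentication time

    def xdb_database(self) -> str:
        # One logical database per tenant inside the shared XDB engine.
        return f"tenant_{self.tenant_id}_metadata"

    def cassandra_table(self, table: str) -> str:
        # Shared Cassandra cluster; certain tables belong to certain tenants.
        return f"{table}_{self.tenant_id}"

    def swift_container(self) -> str:
        # One Swift container ("space") per tenant for content files.
        return f"content-{self.tenant_id}"

scope = TenantScope("acme")
assert scope.xdb_database() == "tenant_acme_metadata"
```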
Go ahead. [Audience question:] I like your testing pipeline — do you do any continuous resiliency testing, chaos monkey kind of stuff? We don't do anything like that today. We've thought it would be a good idea, but we haven't had the bandwidth to add it to the environment. We do have our pre-prod environment set up as a mimic of production, and our thought is that that would be the playground for running that kind of thing: it wouldn't affect production, but we could monitor it just like we monitor production. It's another Cloud Foundry environment with all the firewall rules and security built in, so we could do whatever we wanted there, and developers could actually log in and look at problems or fix them in real time. Whereas in the production environment, because of EMC rules and some of the regulatory constraints, developers can't log in. All right, we're out of time. Thanks a lot.