My name is Julian Hjortshoj. I work for the Dell EMC Cloud Platform team. Most of our team is engaged in the Cloud Foundry Diego Volume Services project, which is what I'll be talking about today. Some of you might have already heard some of this, because we kind of go over what exactly volume services are every time. So if you have, apologies; I try to make it a little bit shorter every time.

So quickly, what we're going to talk about is what volume services are and how they work in Cloud Foundry, and what we've done since maybe the last time you were here. We last presented in Europe at Frankfurt, so we'll talk about what we've done in the past year, what we're planning next, and then a page full of resources which you can pick up off of the slide deck after I post it after the conference. And we'll hopefully have some time for questions.

So what are volume services, exactly? Volume services is a framework that allows arbitrary sources of volume data to be mounted into Cloud Foundry containers. It's also an initial set of implementations of that framework for things like EFS, now Azure File Service, Dell EMC ECS, and existing NFS shares.

So just looking at how this works at a very high level, this slide's a little out of date because we no longer have a CC bridge sitting between the Cloud Controller and the BBS, but the overall gist is still correct. Essentially, this is what Cloud Foundry looks like with no volume services. You've got the Cloud Controller managing things, and it's negotiating with the BBS in Diego to place workloads onto Diego cells.

So if we look at what volume services adds to this picture, it's a service broker. You may or may not have gone to some of the talks about the Open Service Broker API, but that has a mechanism in it for service brokers, when you bind a service to an application, to specify that that binding should have one or more volume mounts associated with it. Then typically there's some network volume, which may or may not be provisioned by the service broker. In the case of our NFS one, we just attach to arbitrary NFS shares out in the world. But in some of our other service brokers, we actually do the provisioning from the service broker. And finally, a volume driver, and this is the piece that's co-located on the Diego cell. Oh, something's wiggled. We're back. So the volume driver has to be co-located on the cell because that's the piece that actually mounts the volume into the host namespace on the cell so that it can get attached into the container.

So how does this create a volume mount in a Cloud Foundry workload? Essentially, you can see here that when you create a service instance, the Cloud Controller talks to the volume service broker and makes the create-service-instance call, the same one you would for a database or anything else. If it's the kind of service broker that creates a volume, then it'll go off and do that. If it's our EFS broker, it'll go talk to Amazon and tell it to make an elastic file system, for example. Or maybe it just records some information. Then the Cloud Controller asks the service broker to bind that service to an application. The service broker comes back with a packet of information that's a volume mount and hands it back to the Cloud Controller. We stash it away in the Cloud Controller database for later. And then when the workload actually starts up, that metadata gets passed through the rep to our volume manager.
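To make that packet of volume-mount information concrete, here is a minimal Go sketch of what a broker's bind response could carry. The field names follow the volume_mounts binding format used by Cloud Foundry volume services; the driver name, paths, and share address are illustrative placeholders, not values from the talk.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative shapes only: the field names follow the volume_mounts section
// of the Cloud Foundry volume services binding format, but the driver name,
// paths, and share address below are placeholders.
type VolumeMount struct {
	Driver       string       `json:"driver"`        // volume driver on the cell, e.g. an NFS driver
	ContainerDir string       `json:"container_dir"` // where the volume shows up inside the app container
	Mode         string       `json:"mode"`          // "r" or "rw"
	DeviceType   string       `json:"device_type"`   // only "shared" is supported in Cloud Foundry today
	Device       SharedDevice `json:"device"`
}

type SharedDevice struct {
	VolumeID    string                 `json:"volume_id"`
	MountConfig map[string]interface{} `json:"mount_config,omitempty"` // opaque hints for the driver, e.g. the share address
}

func main() {
	// A hypothetical bind response fragment: the broker returns this to the
	// Cloud Controller, which stashes it until Diego passes it to the driver.
	bindResponse := struct {
		VolumeMounts []VolumeMount `json:"volume_mounts"`
	}{
		VolumeMounts: []VolumeMount{{
			Driver:       "nfsv3driver",
			ContainerDir: "/var/vcap/data/mydata",
			Mode:         "rw",
			DeviceType:   "shared",
			Device: SharedDevice{
				VolumeID:    "nfs-share-1",
				MountConfig: map[string]interface{}{"source": "nfs://10.10.0.5/export/vol1"},
			},
		}},
	}
	out, _ := json.MarshalIndent(bindResponse, "", "  ")
	fmt.Println(string(out))
}
```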
The volume manager then talks to the volume driver, which is the next piece of the system that knows anything about what this blob of service binding metadata means. And that volume driver goes and does a mount of some sort into the cell OS. And finally, that results in that mounted volume getting passed to Garden when the workload starts up. Garden already bind-mounts a bunch of stuff into your container namespace when it starts your app, so we just add that mounted volume to the list of stuff that gets bind-mounted into the container. And off you go, it's magically there in your container. So that's roughly how it works. That hasn't really changed in the past, I don't know, year, 18 months, something like that.

So what's new? That's what this talk is supposed to be about, right? The biggest one is the Container Storage Interface. Some of you might have heard my lightning talk about this yesterday. If not, the Container Storage Interface is a project that was started about a year ago and has been in active development since early this year. It's a collaboration between all four of the major container orchestrators to come to a standardized interface for volume plugins. So volume services, as you've maybe been using them already in Cloud Foundry, are a combination of an open service broker and a Docker volume plugin. We picked the Docker volume plugin API because it was the only thing resembling a standard at the time that we started this work. But even at that time, we were wishing that there was something a little bit more formally standardized, as opposed to just kind of cribbing what Docker did. So that's what this effort is. The goal of the Container Storage Interface is to provide a consistent interface, just like you have with the container networking interface or the open service broker, that will be supported across container orchestrators. And that eventually will allow somebody who's providing volume services to create a single set of plugins that work in Kubernetes, in Cloud Foundry, in Docker, and in Mesos. We've been at this since early 2017, and it's now at kind of a pre-alpha state.

Basically, the Container Storage Interface, similar to what we have in Cloud Foundry, has two components. One is the controller plugin, and you can think of this as sort of a subset of what a service broker does. The controller plugin is responsible for provisioning volumes, and also, in the case of things like block volumes, it's responsible for doing attachments between the volume and the VM. So again, if you think of an elastic block volume, the controller plugin would be the bit that talks to AWS to say, hey, attach this block device to this VM. The node plugin is very similar to our volume driver. It has to be co-located on the cell or the VM that's doing the actual work of starting up containers, and that's the one that's responsible for mounting, just like we do in Cloud Foundry volume services.

The real benefit from our perspective is that it's a lot like what we've got today, but it's standardized. So particularly, if you think about a landscape where you imagine Cloud Foundry co-located with Kubernetes, running a set of shared services maybe in the same BOSH environment, you can also imagine that you'll have volume services that work across those two runtimes. So this, like I said, is the way that volume services work in Cloud Foundry today. This should look familiar. We just looked at something that looks very much like this, with a service broker and a driver.
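As a rough picture of the controller and node plugin roles described above, here is a hand-written Go sketch. The interfaces are simplified stand-ins modeled on the CSI CreateVolume and NodePublishVolume calls, not the real gRPC-generated types from the spec repository, and the NFS mount logic is purely a toy.

```go
package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
)

// ControllerPlugin provisions volumes and, for block storage, would also
// handle attaching them to VMs.
type ControllerPlugin interface {
	CreateVolume(ctx context.Context, name string, capacityBytes int64) (volumeID string, err error)
	DeleteVolume(ctx context.Context, volumeID string) error
}

// NodePlugin runs on the cell or VM that hosts containers and does the mount.
type NodePlugin interface {
	NodePublishVolume(ctx context.Context, volumeID, targetPath string, readOnly bool) error
	NodeUnpublishVolume(ctx context.Context, volumeID, targetPath string) error
}

// nfsNode is a toy node plugin that shells out to mount an NFS export.
type nfsNode struct{ server, export string }

func (n nfsNode) NodePublishVolume(ctx context.Context, volumeID, targetPath string, readOnly bool) error {
	if err := os.MkdirAll(targetPath, 0o755); err != nil {
		return err
	}
	source := fmt.Sprintf("%s:%s/%s", n.server, n.export, volumeID)
	args := []string{"-t", "nfs", source, targetPath}
	if readOnly {
		args = append([]string{"-o", "ro"}, args...)
	}
	return exec.CommandContext(ctx, "mount", args...).Run()
}

func (n nfsNode) NodeUnpublishVolume(ctx context.Context, volumeID, targetPath string) error {
	return exec.CommandContext(ctx, "umount", targetPath).Run()
}

func main() {
	var node NodePlugin = nfsNode{server: "10.10.0.5", export: "/export"}
	if err := node.NodePublishVolume(context.Background(), "vol1", "/var/vcap/data/volumes/vol1", false); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```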
In the new CSI world, as we've implemented it today, we have essentially a generic service broker that's capable of consuming CSI controller plug-ins, so anybody's random volume plug-in, and adding a catalog layer on top of that to facilitate user interaction in a way that makes sense for Cloud Foundry. And then we have a node plug-in taking the place of the volume plug-in, the Docker volume plug-in, that we have today. So in one case, we have a service broker sitting on top of the CSI component. In the other case, we have the CSI component replacing the component that we would have previously run. But they serve pretty much exactly the same functions. The service broker will call into the controller plug-in and ask it to provision volumes, and the node plug-in will take a packet of metadata coming out of Diego and turn that into a volume mount that gets consumed by Garden.

So CSI support in Cloud Foundry is currently experimental. CSI is still evolving, so it hasn't yet stabilized; we're still making breaking changes to it. And because only shared volumes are supported in Cloud Foundry, today we only support CSI shared volumes. Block devices can't really be attached to containers in Cloud Foundry because we lack the scheduling guarantees that would make it safe to attach a block device to a container, so the same constraint applies to CSI as well. CSI for Kubernetes or Docker will allow you to attach block devices, and in part for that reason, there's a ControllerPublishVolume endpoint in CSI, which is typically where you would do attachment. We don't today support plugins that have that endpoint, because we don't have a good hook. Essentially, we don't know, from the context of a service broker in Cloud Foundry, which cell an application is going to end up on when we bind a service to it. So we can't, from that context, predict which VM the storage should be attached to, because we just don't know; essentially, it goes into the auction process in Diego and gets randomly placed on a cell. We could eventually hack around that, but at this point, it didn't seem to make sense. And the last point here is that the testing so far is limited to local volumes, because that's the plugin we created, so that's what we put through our pipelines.

So moving on to perhaps more practical matters, very recently we've added Samba/CIFS support. This actually came out of the Azure development team. They've built a CIFS/Samba release that allows you to provision Azure file services in an Azure environment and attach those to your containers running in Cloud Foundry in Azure. They're working on something a little bit like our NFS one, where you can bring your own CIFS shares and attach to those just like you do today with our NFS broker. And I think they're also building a tile for that. But you can try it today. It's in a Cloud Foundry Git repo, and it's all working.

Something else to note is we've added support for cf-deployment. So if you're running open source Cloud Foundry and you haven't tried cf-deployment, you definitely want to do that; it's miles better and easier to deploy than the old cf-release plus diego-release model. Most of our volume services now have ops files that allow you to just do a cf-deployment and mix in, for example, the NFS file service ops file. And then it'll create your CF deployment with all the volume services already set up. It saves you having to do manifest generation and all that other yucky stuff.
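Returning to the CSI broker described above, here is a hedged Go sketch of that arrangement: the broker handles the Open Service Broker provision and bind calls, delegates volume creation to the controller plugin, and returns the bind metadata that Diego later hands to the node plugin. The interface, struct, and plugin name here are invented for illustration; they are not taken from the actual broker code.

```go
package main

import (
	"context"
	"fmt"
)

type ControllerPlugin interface {
	CreateVolume(ctx context.Context, name string, capacityBytes int64) (volumeID string, err error)
}

type CSIBroker struct {
	controller ControllerPlugin
	instances  map[string]string // service instance ID -> volume ID
}

// Provision would run when a create-service request arrives from the Cloud Controller.
func (b *CSIBroker) Provision(ctx context.Context, instanceID string, capacityBytes int64) error {
	volumeID, err := b.controller.CreateVolume(ctx, instanceID, capacityBytes)
	if err != nil {
		return fmt.Errorf("controller plugin failed to create volume: %w", err)
	}
	b.instances[instanceID] = volumeID
	return nil
}

// Bind returns the volume_mounts metadata that Diego later hands to the node
// plugin on whichever cell the application lands on.
func (b *CSIBroker) Bind(instanceID string) map[string]interface{} {
	return map[string]interface{}{
		"driver":        "example-csi-plugin", // hypothetical plugin name
		"device_type":   "shared",
		"mode":          "rw",
		"container_dir": "/var/vcap/data/" + instanceID,
		"device":        map[string]interface{}{"volume_id": b.instances[instanceID]},
	}
}

type fakeController struct{}

func (fakeController) CreateVolume(_ context.Context, name string, _ int64) (string, error) {
	return "vol-" + name, nil
}

func main() {
	broker := &CSIBroker{controller: fakeController{}, instances: map[string]string{}}
	if err := broker.Provision(context.Background(), "my-instance", 1<<30); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(broker.Bind("my-instance"))
}
```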
So if you are still running cf-release and diego-release and deploying using manifest generation the way you used to, the other thing we've added in the past year, and this isn't really all that new, but I left it in because it's new since Frankfurt, is the ability to use BOSH add-ons to deploy the driver piece. Because the driver has to be co-located on the Diego cell, it's annoying to mix it into the Diego manifest, especially if you don't have ops files as a mechanism to do that. So BOSH added a nice add-ons feature, using the BOSH runtime config, that allows you to target VMs that have a specific job on them and place other jobs on those VMs. We use that mechanism, so we say that any VM that's running the rep should also get our volume driver. That's a good mechanism to use for now. I believe Pivotal Cloud Foundry and Ops Manager are going to adopt this mechanism in order to be able to co-locate things on Diego cells going forward.

Also new from a year ago, PCF Dev has local volume release in it. So if you're looking for a way to just try out volume services, and you want to create an app that consumes file services that persist across deployments or restarts, PCF Dev is definitely the quickest way to do that, because you can just fire it up on your laptop. Local volume release is already in your CF marketplace, so you can just create a service instance, bind it to your app, and off you go. And that app is basically just relying on the existence of a directory in its container environment, so you should be able to write an app in the PCF Dev environment, then turn around and deploy it to a real cloud with a real volume service, and it should just continue to work.

POSIX UID mapping and NFS. This is basically solving a sort of fundamental problem with NFS and Cloud Foundry, which is to say, when you push an app, particularly if it's a buildpack app in Cloud Foundry, your app is always going to run as a single arbitrary UID. Most of the time, you don't really have to care about this, because most of Cloud Foundry doesn't care about your POSIX UID. But because NFS is old, like really old, NFS relies on the UID to figure out who the user is that's doing stuff. So if you imagine an environment where you have some shared volume, and maybe it already exists and you were accessing it from some workload on a VM someplace, or maybe you just created it but you want to access it from multiple applications with heterogeneous access control, having an environment where you don't get to say what UID your app appears as is annoying. So we added a mapping layer into our NFS mounting so that when you bind your application to a volume, you can specify a UID, and we'll intercept the NFS protocol and swap in the UID that you want your user to act as. And then on the flip side, we swap it back, so that by the time it comes back to your application, it looks like the files you created belong to the running user.

We also added LDAP as a way of locking down that mechanism, because if you configure Cloud Foundry to just allow people to specify a UID when they run their applications, that means you're trusting the application developer to be the UID they say they are. If you don't have that level of trust, you can lock down what UID your users get to act as by connecting to an LDAP server, in which case they have to provide LDAP credentials. And then we'll go look up that user in your LDAP or Active Directory server and find out what their actual UID is.
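Going back to the point about apps written against PCF Dev continuing to work in a real cloud: one reason that works is that a bound volume service shows up in the app's VCAP_SERVICES with the container path, so the app never hard-codes where the volume lives. The short Go sketch below reads that path; the "local-volume" service label is an assumption that depends on which broker you bound.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Bound volume services appear in VCAP_SERVICES with a volume_mounts entry
// whose container_dir is the path inside the container. The "local-volume"
// label used below is an assumption; it depends on the broker you bound.

type vcapBinding struct {
	Name         string `json:"name"`
	VolumeMounts []struct {
		ContainerDir string `json:"container_dir"`
		Mode         string `json:"mode"`
	} `json:"volume_mounts"`
}

// mountPath returns the first mounted directory for the named service.
func mountPath(serviceLabel string) (string, error) {
	var services map[string][]vcapBinding
	if err := json.Unmarshal([]byte(os.Getenv("VCAP_SERVICES")), &services); err != nil {
		return "", err
	}
	for _, binding := range services[serviceLabel] {
		for _, mount := range binding.VolumeMounts {
			return mount.ContainerDir, nil
		}
	}
	return "", fmt.Errorf("no volume mount found for service %q", serviceLabel)
}

func main() {
	path, err := mountPath("local-volume")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("volume mounted at", path)
}
```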
So that LDAP lookup is a good way of coordinating between your Cloud Foundry environment and whatever your old NFS server is, which is typically locked down with LDAP.

File provisioning support in ECS. You may or may not be aware, but there's a service broker available for Dell EMC Elastic Cloud Storage. That's primarily an object store, so the first version of the service broker allowed you to create buckets in ECS and bind them to your applications. You then use the S3 protocol to add buckets, delete buckets, or add blobs to buckets, using the new, modern way of doing things. If you're writing an app from scratch, that's definitely the better way to interact with ECS. But if you have legacy code that requires NFS or file access for whatever reason, like it's an old library, or you're working with other apps that require file drop boxes, then you might want ECS to act as a file store. So we've added support so that when you create a bucket in ECS, you can specify that you want it to have file access. That will turn around and enable file access on the bucket, create a user in ECS, map that user to a particular UID, and share the bucket out as NFS, and then we mount it using the same volume driver that we use for regular NFS, attaching as that specific UID.

As for other stuff in the volume services world that isn't new: local volume release is just a nice way of getting started, particularly if you're running in BOSH Lite or a single-cell deployment. This just provisions storage off the Diego cell file system. It's not really recommended for doing any real work, but if you're just experimenting, it's a quick way to experiment, because you don't need a file server or anything. It's also a good reference point if you're thinking about making your own volume plugins and you're thinking about doing it in Golang. And then EFS volume release; I talked about this a little bit already. This is a service broker and driver for attaching elastic file systems in an AWS deployment.

So, coming soon, stuff that's on our roadmap: some enhancements to NFS. Right now, we don't support NFSv4, and we don't have good locking support in NFS. Both of these things are a bit tricky for some people, particularly if you're looking for better performance, so we're looking to enhance that release. We're lining up some enhancements to the EFS service, including support for multiple-AZ deployments. Right now, we only create a mount point in a single AZ, on a single subnet, even if you're deployed across multiple AZs. That means that only if you get lucky in your app placement will you get good performance between the application and your elastic file system. So we're looking to expand that out so that you can have mount points in multiple AZs and get good performance no matter where your app lands. And a CF-pushable service broker: today, the EFS service broker deploys to a VM. We found a lot of people don't like that, because it's just another VM and another IP address that they have to provision, so we're looking to fix that. Windows support is coming. We started doing the initial spadework for this a couple weeks ago, but we're looking to essentially take all of the same capabilities that we have in Linux cells and apply them to Windows cells, so that Windows workloads can mount CIFS volumes and NFS volumes. And then, finally, support for other volume protocols. I mentioned this already: for CIFS/Samba, we have coming very soon support for existing shared volumes over CIFS in a Linux environment.
And then, potentially, based on requirements, there's a bunch of other protocols that we can add support for, like SSHFS or FTPFS or HDFS.

So, just some quick references. If you're looking to try it out, like I said, local volume release is a good first step, and you can pull down PCF Dev here and get up and running with local volume release already installed. For further information, here's a whole bunch of links, and again, I'll publish this slide deck so you don't need to take a picture of it. This has our Slack channel, our CI pipeline, Tracker backlog, and various releases, so everything you need in order to get started or get in touch with us is here. If you have any interest in making your own service brokers and drivers, these are links to the foundational specs that you need in order to do those things, and some examples. It's pretty straightforward, so if you're at all inclined to do this, then we'd be happy to work with you to get going. And, whoops, yeah, definitely, if you're thinking about making a volume driver or something, let us know and we'll work with you.

So that's all I have. Looks like we've got time for some questions. If anybody has any, yeah, go ahead.

Will your slides be available? Yeah, I haven't published them yet, but I'll put them up there in the next day or two. So if you look back at the Cloud Foundry schedule, you should be able to find them.

We were talking yesterday about Kubernetes becoming a container runtime for Cloud Foundry, and the application runtime today is obviously Garden in every case of deployments. Kubernetes exposes stateful sets for pods with persistent storage, and the volume manager, which we are using ourselves, is basically, not the same thing technically, but the same functionality. And I was just confused by having, in the same user experience with the Cloud Controller and everything else, maybe two ways to deal with persistent storage. Because we tell our users all the time that, apart from specific use cases, going to containerized applications is the best way to deal with persistent storage. So how do you plan, or is there any plan, to combine this with persistent storage from Kubernetes?

Yeah, so, I think I heard that question asked yesterday, and I heard them not really answer it. A couple of notes on that. One is, you might have noticed that we really only support shared volumes in Cloud Foundry. And at various times in our what-we-might-do-next slides, we've talked about adding block storage, or singly-attached storage, to Cloud Foundry runtimes. I think now that we're living in this new world with Kubernetes in the landscape, that's probably not going to happen. So essentially, Cloud Foundry persistence will be limited to shared volume storage. And there's a class of applications for which that kind of makes sense, where maybe you just need a simple inbox-type interface for your application: you look for files in a place, and then if they show up, you pick them up, do a thing to them, and put them in some other place, right? There are tons of applications like that. And for those applications, if all you need is just an inbox, then Kubernetes is kind of a bit heavy and sticky for you, so the application runtime might still be a better choice. So I don't think we will discontinue support for it.
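As a toy illustration of that inbox pattern, here is a short Go sketch that polls a directory on the shared volume and moves each processed file to an outbox. The paths are placeholders; a real app would discover its mount path from VCAP_SERVICES as sketched earlier.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// Poll a directory on the shared volume, process each file, and move it to an
// outbox on the same volume. The paths are placeholders for illustration.
func processInbox(inbox, outbox string) error {
	entries, err := os.ReadDir(inbox)
	if err != nil {
		return err
	}
	for _, entry := range entries {
		if entry.IsDir() {
			continue
		}
		src := filepath.Join(inbox, entry.Name())
		// ... do whatever work the file needs here ...
		dst := filepath.Join(outbox, entry.Name())
		if err := os.Rename(src, dst); err != nil { // same volume, so this is a cheap move
			return err
		}
		fmt.Println("processed", entry.Name())
	}
	return nil
}

func main() {
	inbox := "/var/vcap/data/volume/inbox"
	outbox := "/var/vcap/data/volume/outbox"
	for {
		if err := processInbox(inbox, outbox); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
		time.Sleep(10 * time.Second)
	}
}
```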
So the other note I would make is that as we move toward the Container Storage Interface, we'll have a much more uniform interface to the volume services that we support, so you can imagine them being available in the same environment along with Kubernetes. So if it's a shared volume, you may very well have the same shared volume mounted by Kubernetes and Cloud Foundry through the same service broker. So it isn't really exactly an answer. Yeah, yeah, absolutely. We don't have any plans to discontinue it; in fact, I think we'll continue to enhance it. We're actually seeing, in spite of the fact that Kubernetes is now a thing in Cloud Foundry, quite a lot of traction for shared volumes with the elastic runtime. So, anything else? OK, thanks very much. Thank you.