Hi, I'm Julian Hjortshoj. I work for the Dell EMC Cloud Platform team. Our part of that team is in San Francisco and works primarily on the Diego Persistence project. I'm going to talk to you today about the Container Storage Interface. For those of you who don't know, the Container Storage Interface (CSI) is a project that was launched roughly this time last year between Cloud Foundry, Kubernetes, Mesos, and Docker to standardize the way that storage is attached to containers in container orchestration systems. We've been working on it off and on. It faltered a little bit last fall around the holidays, picked back up again in the spring, and we've been going full tilt on it since early this year.

So why did we do this thing? All of our various orchestrators need a way of attaching storage to containers. Unless your app is totally stateless, which ideally it should be, a 12-factor app and all that, but sometimes it's not. So essentially, we've all evolved to the point where we find we need to store some stuff in some kind of volume. Likewise, storage providers have sensed in the wind that workloads seem to be going toward containers, and they all dearly want to be able to attach their storage to containers. But they don't want to have to do that once per container orchestration system: build the integration with Kubernetes, then turn around and make a Docker volume plug-in, then turn around and make a Cloud Foundry service broker, and so on and so forth. So a standardized API, similar to the Open Service Broker API or the container networking APIs, gives storage providers a way to write one plug-in set that works with all of the various orchestrators. That way, if you're running multiple storage providers and multiple container orchestrators in the same ecosystem, they can all just hook up to one another.

Digging into a little bit of how the API works: CSI proposes that there should be two plug-ins per storage provider. Essentially, it's a division of labor. The first plug-in sits at the orchestrator's controller layer, similar to a Cloud Foundry service broker, and is responsible for provisioning storage and attaching it to a node. The node plug-in is guaranteed to run on the same host as the workload, and that's where we do things like mounts, the stuff that has to happen on the local node. In the case of Cloud Foundry, that would be the controller plug-in running who knows where, and the node plug-in running in multiple instances, one on each Diego cell. If you want more details on that, the spec is at this link.

So the basic control flow, and this is quite a bit simplified, there are more RPC calls than this, is that the container orchestrator, in our case the Cloud Controller, asks the controller plug-in to create a volume when prompted by some user action. That calls the CreateVolume RPC, and some storage gets provisioned; or maybe your storage already exists and this is a no-op. The container orchestrator doesn't know, but it knows that it has to call create, and it gets a response back with a little bit of metadata. Then, when the orchestrator is getting ready to place the workload on a particular node, it calls ControllerPublishVolume. If it's something like an NFS mount, that will probably be a no-op, because there isn't really anything that has to happen at the master node for an NFS mount. If it's something like a block device, that's where we would attach the block device to the VM. So it depends on the kind of storage. Finally, we call NodePublishVolume on the Diego cell, or the worker node, or whatever you want to call it, and this is where all the node-specific work happens. If it's an NFS volume, we do an NFS mount; if it's a CIFS (Samba) volume, we do that kind of mount. It could also be a block device that requires some care and feeding in the context of the VM it's going to run on, so iSCSI initiation would happen at that step.
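To make that division of labor concrete, here is a minimal sketch in Go of the flow just described. The real spec defines these as gRPC services in protobuf; the RPC names below follow the draft spec, but the simplified signatures, the ControllerPlugin and NodePlugin interface names, the placeVolume helper, and the example mount path are illustrative assumptions, not the actual generated API.

```go
package csisketch

import "fmt"

// ControllerPlugin runs next to the orchestrator's control plane
// (e.g. alongside the Cloud Controller in Cloud Foundry).
type ControllerPlugin interface {
	// CreateVolume provisions storage, or no-ops if it already exists.
	CreateVolume(name string, capacityBytes int64) (volumeID string, err error)
	// ControllerPublishVolume attaches the volume to a node. For NFS this
	// is typically a no-op; for a block device it attaches it to the VM.
	ControllerPublishVolume(volumeID, nodeID string) error
}

// NodePlugin runs on the same host as the workload (e.g. each Diego cell).
type NodePlugin interface {
	// NodePublishVolume does the node-local work: an NFS or CIFS mount,
	// or iSCSI initiation plus a filesystem mount for a block device.
	NodePublishVolume(volumeID, targetPath string) error
}

// placeVolume walks the simplified three-step control flow from the talk:
// create, publish to a node, then publish on that node.
func placeVolume(ctrl ControllerPlugin, node NodePlugin, nodeID string) error {
	volID, err := ctrl.CreateVolume("my-volume", 10<<30) // 10 GiB
	if err != nil {
		return fmt.Errorf("create: %w", err)
	}
	if err := ctrl.ControllerPublishVolume(volID, nodeID); err != nil {
		return fmt.Errorf("controller publish: %w", err)
	}
	if err := node.NodePublishVolume(volID, "/var/vcap/data/volumes/my-volume"); err != nil {
		return fmt.Errorf("node publish: %w", err)
	}
	return nil
}
```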
Just to walk you through quickly what the interfaces in question are: there's the create and publish pair that I talked about, and there are also some calls for checking volume capabilities, to pass a little bit more information back and forth. Again, you can dig around in the spec if you're interested in the details of these.

So that was the controller. The node is a bit simpler. It has an additional ProbeNode RPC that just checks that the node is healthy, and a GetNodeID RPC that allows the node to identify itself back to the controller. For example, if you're attaching an EBS volume to an EC2 VM, the controller needs to know the EC2 instance ID so that it can talk to Amazon to get those things attached. And then finally we have the identity service, which is just a couple of RPCs implemented by both plug-ins so that we can do things like check the version. (A short sketch of these node and identity RPCs follows at the end.)

So when is this going to happen? Kubernetes is looking at their version 1.9, which should be around the beginning of next year, for the first experimental version. I should say that since the spec is pre-alpha, it isn't yet finalized; it's still quite in flux. So by the time they publish something, we'll have something a little more locked down, where we start guaranteeing backward compatibility and that kind of thing. Mesos, again, is looking at around the end of this year. Moby, aka Docker, hasn't yet decided what version they're targeting. And in Cloud Foundry, you can have it today: in open source Cloud Foundry, we've already added a POC-level implementation of this feature. So if you want to go write your own CSI plug-in set, and it's not a block device, you should be able to use it in Cloud Foundry with this feature. Definitely let us know if you're going to try that, because we'll be happy to do a little hand-holding.

Just some quick resources. I'll publish this slide deck. And with that, I think I've run well over my time, so thank you.
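To round out the earlier sketch, here is what the node and identity services described above might look like. Again, these are assumed, simplified Go signatures rather than the real protobuf-generated ones; the RPC names (ProbeNode, GetNodeID, GetSupportedVersions, GetPluginInfo) follow the draft spec, which was still in flux at the time of this talk.

```go
package csisketch

// NodeService is the per-host half of the plugin pair, beyond
// NodePublishVolume shown earlier.
type NodeService interface {
	// ProbeNode checks that the node plugin is healthy and able to serve.
	ProbeNode() error
	// GetNodeID lets the node identify itself to the controller, e.g. an
	// EC2 instance ID so the controller can ask AWS to attach an EBS volume.
	GetNodeID() (string, error)
}

// IdentityService is implemented by both plugins so the orchestrator can
// negotiate a version and identify what it is talking to.
type IdentityService interface {
	GetSupportedVersions() ([]string, error)
	GetPluginInfo() (name, vendorVersion string, err error)
}
```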