All right, well, once again, welcome everyone. I'm a product manager on the OpenShift engineering team, and today's discussion is going to be centered around storage. But before we get into that, I just wanted to thank everyone for their involvement in the Commons. Hopefully you saw the public announcement we put out today; it went over Business Wire, so a lot of attention is now being brought to the Commons, and that is all because of you. So thank you for joining and participating in this forum. Like I said, today's session is on storage. We have one of our lead engineers in this area, Mark Turansky, on the line. Mark is going to take us through what he's been working on for the last couple of weeks, where he is in the code, and hopefully show us why we're going to be using this remote storage solution and how it's going to be implemented.

Thanks. My name is Mark Turansky, and I'm an engineer working on OpenShift's implementation of persistent storage. We have an hour scheduled for this part, but luckily for all of us I can explain our approach to storage in about a dozen slides. Let's take about ten minutes for that, and after that we'll have plenty of time to field questions, and I hope I can answer them all effectively. In the next few slides, I'll show you the user experience of cluster administrators and application developers as they create and consume cluster resources, specifically storage.

Storage in OpenShift: we need to allow administrators to describe the storage available in their cluster in a way that application developers can discover and request as a resource. Meanwhile, we need to keep storage, as a resource, reliable and not bound to any particular technology. Doing so allows us to be flexible enough to work in all data centers and environments. Is your data a pet, or is it cattle? That's an important question, I think. There's a slide from a presentation out of CERN, where they're using cloud platforms to
manage and crunch all the data coming out of the Large Hadron Collider. The last bullet is interesting, I think, because it suggests a hybrid approach: it says we should aim for mostly cattle, but pets are still warranted and needed at CERN, just as they will be needed for some time in the data centers running corporate clusters.

So how do we do this? Just as an admin provisions a cluster with nodes, an admin will provision a cluster with storage. Many ways to script and automate resource provisioning exist or are being developed, and we want to automate as much of that as possible, while recognizing that some infrastructure, in certain environments, will be easier to provision than others. But it all begins with an admin provisioning the cluster.

Persistent volumes are resources, like nodes. They are created by the administrator, they are owned by the cluster, and they are not namespaced. A persistent volume is backed by an actual volume somewhere in the underlying infrastructure. Persistent volumes are created through the API like other resources. But what kinds of storage are available?
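As a concrete illustration, the persistent volume object just described might look like the following. This is a hedged sketch in the style of the current Kubernetes v1 API, since the exact field names have shifted across API versions since this talk was given; the server address and export path are hypothetical placeholders.

```yaml
# A cluster-scoped PersistentVolume created by an administrator.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 10Gi            # size advertised to developers
  accessModes:
    - ReadWriteOnce          # mountable read-write by a single node
    - ReadOnlyMany           # or read-only by many nodes
  persistentVolumeReclaimPolicy: Recycle   # what happens when a claim is released
  nfs:                       # the "source": the real volume backing this resource
    server: nfs.example.com  # hypothetical NFS server
    path: /exports/pv0001    # hypothetical export path
```

Note how the source stanza (here `nfs`) is the only part tied to a particular storage technology; swapping it for an iSCSI or EBS source leaves the rest of the object, and the developer's view of it, unchanged.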
Well, there are all kinds of storage that need to be supported. Volumes are implemented as plugins, and many plugins exist, with more under development. We currently have plugins working for NFS and iSCSI; Gluster I know is being talked about, and many others. Volume plugins are natural extension points, because if we can develop a plugin for something, we can mount that volume in OpenShift.

Just as a pod is a request for a set of compute resources, a persistent volume claim is a request for storage resources. Claims live in the user's namespace. The API used to create claims is also used to discover what kind of storage is available, as described by the size and the access modes of the volumes in the infrastructure. In this example, the developer is requesting a volume they'd like to be able to mount in two ways: read-write-once and a read-only mount. This is an example where we can assume the developer queried the API and learned what sizes and access modes were available among the cluster's volumes in order to make that specific request. It's important for the app developer to be able to rely on that published behavior, as you'll see in an example in just a few slides, where we change the access mode of a pod's volume.

A claim is bound to an available volume in the system that matches it, which means it's possible for some claims to go unmatched. But when you do have a claim and it is bound to a matching volume, you can use your claim as a volume in your pod. We're implementing this as another kind of volume plugin: the plugin in OpenShift finds the volume backing the claim and makes that volume available to the pod. The application developer has the same control as they did before; they still decide the access mode and how the volume is mounted, but now they're decoupled entirely from the actual volume itself. They can focus on application development while admins manage the high availability of the cluster and its data.

This is important: persistent volumes have a lifecycle that is
longer than the pods that use them. You can delete your pod but retain access to your data, because you still have your claim to your data: your data outlives your pod. Here the volume is being remounted, under the same claim, in a completely different pod. In this case, the volume is being mounted by ten pods in read-only mode, to highlight how a developer can take advantage of the multiple ways to mount the same volume. In this example, we want to serve data across a number of front-end web servers, but first the data has to be loaded: a pod preloads the volume in read-write-once mode, and then the volume is used many times, in read-only-many mode, for many pods behind a replication controller.

Eventually you're done with your data. When you're ready to delete it, releasing your claim will cause the volume to be recycled. How a volume is recycled depends entirely on the type of volume it is and its plugin; different volumes will have different lifecycles.

To a developer, their data has identity; it matters to them. But it's completely decoupled from the administration: all aspects of managing the storage are left to the admin. And to an admin, storage can be either pets or cattle. Dynamic provisioning works for cattle storage, while traditional storage management remains viable for pets. OpenShift supports both cases; both kinds of storage work equally well. Everybody's happy, cattle and pets alike.
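The claim-then-consume flow described above can be sketched as two objects: a namespaced claim requesting a size and access mode, and a pod that mounts the claim rather than any concrete volume. Again, this is an assumption-laden sketch using current-style v1 API fields, since the schema has changed since this talk; the object names and container image are hypothetical.

```yaml
# A namespaced claim: the developer asks for 3Gi, read-write-once.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: frontend-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi           # the cluster may bind a larger matching volume
---
# The pod references the claim, not the underlying NFS/iSCSI/EBS volume.
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
    - name: web
      image: nginx           # hypothetical application image
      volumeMounts:
        - name: data
          mountPath: /var/www/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: frontend-data
```

Deleting the pod leaves the claim, and therefore the data, intact; only releasing the claim triggers the volume's reclaim behavior.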
And that's how we're approaching storage in OpenShift: we decouple developers from the infrastructure. Admins provision storage in the cluster; developers use claims to discover and consume storage resources. OpenShift removes direct coordination between the two while remaining flexible enough to support all types of volumes. That's pretty much how we're doing it. This represents the workflow, the actual user experience, for both the administrative side and the application side. Hopefully that's compelling; that's our approach. I hope it made sense, and I'm available to take questions.

Great, thanks, Mark. Let me walk through it more step by step, verbally. If I'm an admin, I pretty much know how to get some storage from my array, be it an iSCSI volume or a Fibre Channel attached volume. Most of the time, I would make that phone call and say, hey, give me fifty 10-gigabyte volumes, and I would get that pool assigned to me. What's my next step in the OpenShift platform? How do I use those?
Well, there are a number of ways to do that. Because all of the volumes are created with the API, and provisioned to the cluster via the API, we can create many, many ways to automatically create those resources. They could be, say, Ansible playbooks that simply automate against the API, or something more sophisticated like a dynamic provisioner, which you might expect to see in more dynamic environments, like cloud environments or any storage resource with a web-level API. Either way, the admin up front describes the resources, and we're going to help automate the various ways of creating those resources in the cluster. If you create all the resources manually, you need to make sure you are tracking all of those IDs so that when you post them to the API they all become part of the cluster. Otherwise, you can take advantage of dynamic provisioners that can automatically create those resources and manage them for you, which relieves you of the task of tracking all of those IDs and whatnot. So we will support both manual creation of storage, as we do today, and automatic creation through those dynamic provisioners.

Okay, so on the next slide, in that stanza that you were showing me, is that where I have to describe, when I'm giving it the iSCSI or the Fibre Channel, in this block?

That's right. In this spec, if you actually look at it, the last block there is a source attribute. That is the key thing: it represents an actual volume. Whereas the application developers have claims, which are intentionally decoupled from any volume, the admin's object is the actual volume. So there would be a plugin for iSCSI, a plugin for NFS, a plugin for whatever type; EBS in the case on the slide. A plugin represents each volume type. And then those definitions can be read and posted into the API, manually
or in an automated fashion.

And this definition is at the pod level, right?

No, this is its own top-level object. A persistent volume exists outside of any pod; it has identity and longevity that outlasts the pod. So we first create the volume, and it lives all by itself. Then, later, we use that same volume through a claim. The claim is in your namespace, like a pod. You use your claim in place of the volume: our implementation of claims finds and matches the real volume and binds it to the pod.

Very nice. So it's a claim, a sort of supply-and-demand situation. And then how does this pod give it to the Docker container that's running inside it?

That is all the same as OpenShift works today. We define it as a volume plugin; the kubelet running on the node will mount it into the pod, attaching and then mounting in some cases, and of course it's exposed to the Docker container. All of that is unchanged. We are just layering this feature, our way of creating and managing persistent volumes and providing that indirection through the claim, on top of it.

And these are all shared stores to begin with, either NFS volumes or iSCSI volumes or Fibre Channel volumes. What if I kill my pod, or evacuate a node or something? How does Kubernetes bring my storage to my new node for me?
Great question. There are reconciliation loops everywhere throughout the project. When your pod goes away, the volume will obviously be unmounted, and when the pod is rescheduled, the volume is reattached and remounted with it. Because the volume definition follows your pod wherever it goes, when the pod lands on a new node, the kubelet on that node will run the volume plugin, and the volume plugin knows how to attach, mount, and do all the things specific to that type of volume.

Roger. And then this obviously assumes that you already have a node up, you already have Kubernetes or OpenShift up. What's the story around the host level for shared storage? Like, what if the host is made out of shared storage to begin with, or local storage? We've got to follow up on that question. Jeff, if you could unmute your line, maybe you can state the question better. Jeff McCormick? I see Jeff's question in the little chat window, about local storage.

Local storage was quite specifically not implemented in this particular feature; local storage can be handled completely differently. Now, that said, if we wanted to define some volumes that are local volumes, host paths for example, we could make it such that the cluster knew about all of them as persistent volumes, and your pod could be scheduled onto the node that has your data. But all of the mechanics of making that data highly available when it's local, and so on, are very complicated. So we're quite specifically punting on all that complexity by going with network storage; there are still a lot of use cases for this particular feature.

Can you talk maybe a little bit more about the developer? How do you think we're going to expose this claim process to them? I have my application running, and I suddenly decide, hey, I want some storage. Is there like an osc command that I'm issuing?

First, the discovery of storage is important. If we were to
try to enumerate all the types of storage and mount modes that are available, we couldn't know about them all ahead of time. So first, I expect the developer to go to the cluster to see what kind of storage is available in general: the access modes and the size. Other attributes might be things like IOPS for some of the volumes exposed through the API by the cloud providers, but the important ones really are the access modes and the size. Once the developer can see what's available, they can make a request that suits their needs. They first discover what's available and then post their actual claim. And just as they post a pod and watch it go from pending to running state, they can watch their claim go from pending to bound state. When it's bound, it's bound to an actual volume, and they can act through their claim.

In this example, the developer said, I just need three gigs. The closest-sized volume in the cluster might be a five-gig volume, for example. Since they get at least what they ask for, maybe we give them the five gig, and they can inspect their specific claim and say, okay, I've got my two mount modes, I've got five gigs of storage, and they're happy. Once they see their claim has been satisfied and bound to a real volume, that claim acts as their volume.

That's awesome. So the shared storage that gets attached to the pod gets mounted through to that Docker container. At that point it's just a mounted directory in the Docker container that an application could use to write what it believes to be persistent storage that is actually living remotely, right?

That's exactly right.

That's awesome. I mean, that's fantastic. Yeah, and in this example here, there's the fact that it's actually mounted on the host under an NFS path, while the pod sees it mounted at its own requested path and nothing else. So that will probably open the door to a lot more applications running on a Kubernetes cluster. It won't just be stateless twelve-factor apps, which it had been totally catered to, but it also opens the door to a
larger breadth of possibilities.

That's right. Persistent storage for databases and whatnot has been one of the most requested features upstream, so we're seeing a lot of demand in open source for all of this.

Now what... go ahead. Somebody have a question?

This is Judd, yeah, about my shared storage question. Are there any knobs added for shared persistent block storage, like NFS or iSCSI, to help the NFS or iSCSI systems manage where and how things are mounted? What would a typical NFS mount or iSCSI mount string look like? Does it change at all?

That's a great question, and I'm not sure I know exactly off-hand right now. But the NFS plugin that is currently up has a server address, a path, and a mount option, so you can do read-only or read-write and so on; that's the extent of the support for what NFS can do. To your question, Judd, there aren't that many properties required to make this work. I'm not really sure exactly what you mean by what it should look like, but there are only three properties we've got for NFS, so I don't think it's going to be terribly complex. I'm sorry if I'm not fully answering your question.

Do you have any requests for things like Galera or GFS? Because I remember there are all these fencing daemons and stuff that you've got to have running, and there's a certain amount of quorum that you have to establish. Is that on your roadmap, or does it matter?
Right. That's not in the current specs, and I've not heard of those specifically being requested for OpenShift, but that does not mean they can't be supported. We're working on plugins for many kinds of storage, and if we can create a plugin for it, it can be mounted in OpenShift. Where and when such a plugin gets developed, I don't know. My responsibility currently is to put this volume management framework in place. NFS was a good story for us because it's not a cloud volume, so it's our first implementation for this release; after that, we're leaving it to others to add plugins for other types and build on top of it.

Yeah, that's a good point. Mark, you're pushing a lot of pull requests back to Kubernetes itself, right, around that layer, and we're really working on the user experience of how to consume these claims, right?

Right, this is all going upstream. And then, when it comes back downstream into OpenShift, there's lots we can do in customizing it in ways that Google might not be interested in. The automatic provisioning of, say, three hundred Elastic Block Store volumes might be a great thing to offer. Or if someone had a great big NFS file server on their network in a corporate environment, we can probably build automation for both: something that will carve that up and expose it all as volumes in the cluster. Either way, there's nothing that says we can't automate how you provision these things, and likewise we can create many more volume plugins going forward.

One struggle in this area is when you want to have two separate entities writing to the same storage. NFS, you know, allows that at the file system layer; some other technologies, like Gluster, have locking mechanisms. Have you started to run into that at all, when people want to have three pods all writing to the same volume?
Yeah, and I think there was just a question in the chat about locking across pods. We're punting on that, and I don't think it's going to be the responsibility of either Kubernetes or OpenShift. Developers who want read-write-many are, I'd bet, going to have to handle all locking and contention on their own.

Well, they could potentially get it through the Gluster file system plugin, perhaps in a 3.1 release of OpenShift.

Right, that could work.

Okay, I've been talking a lot. Any other questions on the line? You can just unmute your phone and feel free to ask.

Is there any demand out there, and I know this sounds completely ridiculous, but my boss is going to ask about it, any demand for Cinder out there?

Cinder, yeah, OpenStack. Cinder's been talked about as well; I believe Cinder has been mentioned in the OpenShift community too.

Yes. My particular boss comes from the storage world, and he's running our OpenStack team now, and he'll be very interested in being able to leverage all the work that we've put into Cinder to make our storage tiers available. It could be very interesting.

Sure. I thought it might be completely ridiculous, but maybe he would use that. Cinder would allow OpenShift to access a variety of different storage. Sorry, would Cinder map back, behind the scenes, to any kind of storage?
Yeah. If a volume plugin exists for it, we can mount it. So whether it's a single Cinder plugin that represents many kinds of storage behind it, that's great; or whether OpenShift itself has a plugin per volume type, that's great too. Either way, I think it's great flexibility.

The other part of it is the provisioning. The volume plugin within Kubernetes will mount a volume and expose it to a pod; the provisioning of that volume is the other half of this equation. So if we wanted some automatic means of having OpenStack be a dynamic provisioner, which is certainly handy, the natural progression would be to put the plugin in place first, to make it work and have it mountable, which means all of the volumes are created initially manually, by hand. The very next step, a dynamic provisioner for that type of volume, would make it automatic.

This is Joe from the OpenShift team. Just to add to that, we had a similar conversation last week on the Commons call related to networking, and how OpenShift is going to be able to use native SDN capabilities but also plug in to third-party SDN solutions through Kubernetes; OpenStack Neutron is something we're working on there on the networking side. Similarly on the storage side, for OpenShift-on-OpenStack scenarios, which are very common and which we see all the time, having that pluggability to Cinder or to Neutron lets the infrastructure layer, the IaaS layer, manage storage and networking, and then plug in to the various options that are already plumbed into that layer. So we're definitely very conscious of working together with our OpenStack team around this.

Mark, can you forward to the slide where you call out the plugins that the community is working on? And maybe after the meeting you can forward me the URL to that part of the Kubernetes project, in case people want to go and look at it.

Sure. There are a few pull requests currently for NFS, and I know other types have been worked on, but there's no PR to Kubernetes for those yet, I think. And like anything else to
be created, I'll need to get with Clayton and the rest of the team, and I'm sure there are more avenues too; we can get lots of new plugins and provisioners and so on. This is all done upstream, and currently we're working upstream for the feature and all the plugins and whatnot, but there's no reason it only has to happen there. We, of course, are working together, and we can expertly support all the storage and make it work seamlessly in OpenShift.

Great. So, Jeff or Judd or Keith and Nick, any other questions for Mark? Cindy? Tom?

Only question is, if we've got more questions, can we reach out on the OpenShift mailing list? Will it be monitored?

Yes, all of the OpenShift lists; I would be available for any questions you may have.

Excellent. All right, awesome, thanks. Once again, thank you for joining the Commons. This will be the last one in February; as we look into March, we'll have more exciting things to present to you. If you're in the northeast, stay warm, and we'll talk to you in March. Thanks, everyone.