All right, good morning, folks. My name is Bob Callaway. I'm with NetApp, and I'm going to be your emcee for this morning. I want to quickly start with a who's who of who we have up here, so we'll go down the line and introduce folks.

Hello, I'm Robert Esker. I'm the product manager for OpenStack at NetApp. Hello, I'm Tushar Katarki, from Red Hat. Hi, I'm Greg Elkinbard. I'm a senior technical director at Mirantis. Hi, I'm Xing Yang. I'm a technologist from the CTO office at EMC. Hi, good morning, I'm Shamail Tahir, EMC. Hi, I'm Ben Swartzlander. I am the PTL for the Manila project, and I work for NetApp. Hi, I'm Dean Hildebrand, and I'm a research manager at IBM Almaden.

So as you can tell, we've got a wide variety of vendors up here; some would consider us friends and some would consider us foes. But the point is that Manila is a community effort. We've got a wide variety of folks, even beyond the folks we have up here on this virtual stage, I guess, participating in the Manila project. So we wanted to give you a cross-vendor view of not just what the function is, but what different companies are doing to integrate with Manila.

We'll give you an overview of what Manila is at a very basic level. We'll quickly jump to a demonstration walking you through a fairly simple use case of creating shares, attaching shares, and so on. We'll talk about the API and the key concepts and constructs that people deal with. Ben will talk about what's under the covers: what the architecture looks like, and what some of the challenges are that are unique to a file-share-as-a-service offering. Then we'll go through each of the vendors, and they'll talk through some of the details of how they've integrated their products with Manila. We'll give you a sense of where the project is at and where we're going during the Juno time frame, and then hopefully have time for a few questions at the end. So with that, I'll turn it over to Xing.

So what is Manila? Manila is a project that provides multi-tenant, secure shared file systems as a service in OpenStack. Just as Cinder provides persistent block storage for instances, Manila provides shared access to shared file systems for instances. There are some commonalities between Cinder and Manila, as both deal with storage. However, there are significant differences between the two. While Cinder does not need knowledge of networking, Manila needs to know a lot about individual tenant networks. Cinder relies on hypervisors to manage attaching and detaching volumes; Manila, on the other hand, routes around the hypervisor and connects the storage to the instances directly. So Manila needs to handle user access control and permissions assignment, and it needs to deal with security and authentication. Manila supports the NFS and CIFS protocols today, and more protocols will be supported in the future.

So here's a diagram that shows a Manila use case in an OpenStack cloud. There are two requests coming from the orchestration layer. The first one wants to give guests 1 and 7 access to the existing R&D file share. The second one wants to create a new file share for marketing and provide access to guests 6 and 8. So Manila enables an automated process: it makes it easy for guests to get access to file shares over isolated tenant networks.
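To make the hypervisor distinction above concrete, here is a rough side-by-side sketch of the two workflows; the IDs, names, and addresses are hypothetical:

    # Cinder: ask for a volume, and the hypervisor attaches it for you.
    cinder create 1                                # 1 GB volume
    nova volume-attach <instance-id> <volume-id>   # appears in the guest, e.g. as /dev/vdb

    # Manila: no hypervisor in the loop. Grant access, then mount inside the guest.
    manila create NFS 1 --name rnd_share
    manila access-allow rnd_share ip 10.0.0.2
    sudo mount -t nfs <export-ip>:<export-path> /mnt/rnd   # run on the guest itself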
According to a 2012 IDC report, file storage accounted for over 65% of total disk capacity in 2012, and file-based storage continues to be a thriving market: spending on file-based storage solutions is expected to reach $34.6 billion in 2016. Without automation, it is very difficult to provision and manage shared file systems, and customers have to work around the problem with scripting or automation of their own. Manila, which provides shared file systems as a service, automates the process and makes it easy for guests to get access to the file systems. It facilitates the management and provisioning of shared file systems in an OpenStack cloud.

Manila became an independent project in June of last year. Since then, companies including NetApp, Mirantis, Red Hat, EMC, and IBM have been working on it, and lots of progress has been made. If you're interested in Manila and want to learn more, you can visit the OpenStack Manila wiki page. Manila is ready for download today; it's available at github.com/stackforge/manila. You can also join us on IRC; the channel is #openstack-manila.

So now what we'll do is show you a quick demo of what it looks like. We've covered the high-level concept, but I always find it useful to see it in action. Again, we're starting with the scenario where we want to create a new share as well as provide access to existing shares. You can see here we've got two shares that already exist, a finance share as well as a research and development share. They're presented within the same construct in Horizon where you would typically see volumes and instances, so it looks and feels like a native object within the overall OpenStack framework.

Since we already have a couple of shares, we want to provide access to them for particular guest instances. We'll look at guests 1, 6, 7, and 8; they have IP addresses 10.0.0.2, .4, .5, and .6. The first thing we'll do is click over to the shares tab and look at the share network. You'll notice that the share network we've defined is how the NFS and CIFS exports exposed by the backend storage subsystems are tied into the networks utilized by the instances. In this case, we have a Neutron network called private, with a private subnet. We'll quickly switch over to the graphical view in Neutron, showing that the instances hang off the same network. This is how, when we create a share, it's going to be given an IP address that's addressable over that network.

So now we need to add access for the IP addresses of guests 1 and 7; as I mentioned, that's 10.0.0.2 and 10.0.0.6. We'll let the animation go here for a second. We'll click on the more button, then manage rules, which is where you define the IP-based access. We'll type in the first IP address and click apply. We see that the rule gets created and then goes to an active state. The interesting thing to note is that the creation of a rule is immediate; it then goes back to the backend controller to have the access rule actually set. So it's being enforced at the storage subsystem, not at the instance level. We'll go in, add the next IP address, and click add, and the same entitlement is set on the storage system.
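For anyone following along at the command line, that rule management maps to a handful of client calls. A rough sketch, with a hypothetical share name:

    # Grant guests 1 and 7 access to the existing R&D share by IP:
    manila access-allow rnd_share ip 10.0.0.2
    manila access-allow rnd_share ip 10.0.0.6

    # List the rules and watch them flip to active once the backend enforces them:
    manila access-list rnd_share

    # Revocation is the mirror image, keyed by rule ID:
    manila access-deny rnd_share <access-rule-id>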
Then what we'll do is actually log into these instances; they're all running, in this particular case, on a DevStack node. So we'll log in and create a directory on which to mount the NFS share; we'll call it research_data, since we're attaching the R&D data there. We then need to know the mount string to pass to the mount command, which has the IP address of the share as well as the export path. So we'll go back to the shares menu, click into the detailed view, and there's the export location. We'll simply cut and paste that in and give it the directory /mnt/research_data. We'll hit enter, and since we've already given access to the IP address of this particular instance, it's immediately mounted; there are no other steps required. This is how you would typically do it: at this point you could put it in fstab, handle it with the automounter, et cetera, because you've established the linkage. You can see through df that the share exists and has a certain number of blocks already allocated to it; one gig was the size that was used.

We'll now log into the... oh, it's going backwards. Again, just to prove that this is real, we'll create a file and write some data into it. It's a shared file system, so locking is handled behind the scenes and multiple clients can read and write concurrently within the file system. We'll write "hello world" into the file, then read it back to make sure everything is fine. Now we'll switch into the other VM, log in, and we should be able to read the contents of that guest-one file we just created. So we'll switch over to .5, which is guest number seven, log in, and give the password. Again, we create a mount-point directory. We've already got the IP and export path in our clipboard, so we simply paste it again and point it at the directory. Now we've mounted it yet again. We'll change into it, and we should see the guest-one file that was created by the other VM. It's there; we can read it and see the hello-world statement. We can then write our own message: we'll just copy the file over and append to it, so we know we're on a different VM. This is all fairly standard NFS stuff. So we've created a share, we've mounted it, and we can access it from multiple different VMs. The last step here is just to go back: you see the files there that were created from guest seven and guest one, so we know the share is actually accessible by both nodes; both nodes are able to read and write to it, and away we go.

So that was a share that already existed within the context of Manila. Now we'll address the other use case, where someone wants to create a new share and attach it to two other VMs. We'll go back to the shares menu and click the create share button. We're given a wizard that's pretty typical for Horizon. We'll type in the name of the share and whether we want the protocol to be NFS or CIFS. We'll specify the size of the share we want created, and we'll also create an association with the share network we've defined; again, this ensures isolation to the correct network where the share will be created and offered versus where the instances reside, so that they can connect to it. So we see that it's gone and been created; this is actually invoking the driver, which handles the logic of actually creating the share. And we see that it's available.
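The Horizon wizard is driving the same API you could call from the client. Roughly, assuming a hypothetical share-network name:

    # Create the new 1 GB NFS share for marketing, tied to our share network:
    manila create NFS 1 --name marketing_data --share-network private-share-net

    # Once it shows available, the details include the export location:
    manila show marketing_data

    # ...which is exactly the string a guest with access mounts:
    sudo mkdir -p /mnt/marketing_data
    sudo mount -t nfs <export-ip>:<export-path> /mnt/marketing_data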
We'll then just go in and edit the access rules directly, as we did before, giving the IP addresses of guest six and guest eight. So we'll quickly do that. And again, once we've done this, we'll log into the virtual machines and make sure that, now that we've created this new share that didn't exist before, we're able to access it and do some simple I/O. One thing to note: we'll create a different directory, but we'll have to go back to Horizon and get the mount string, since this is a separate share; it's going to have a separate IP address and a separate export path. You may have noticed earlier that the unique ID for the share is embedded inside the export path, so it's simple to make at least a visual linkage: I created this particular share with this ID; is that where I've mounted it? We'll simply cut and paste that in and attach it to the marketing data directory, and the mount is successful. We can go into it. Again, we created it as a one-gig share, and you can see here that we have 1,008 megs of space, a gig of capacity, for that particular share. So again, we'll simply write a file and show that everything is working fine. We'll log into the second node, again using the same mount string from the other VM. And again, we're able to see guest six's file there; we'll copy it and change the text so that it's guest eight's. And there we go.

The last thing I want to note in the demo is that on the overview chart we have a marker for the number of shares allocated, the amount of space those shares consume, as well as the number of share networks. You can set quotas around all of these things, in the same way that you can around instances, or around volumes with Cinder, and so on. So with that, we'll switch over to Greg, and he'll give us an overview of the key concepts within Manila.

So, Manila and its key concepts. Manila exposes network shares, so the first key concept is a share: an instance of a specific shared file system that's expressed on the filer. A share has a size, an access protocol, and a share type, and it can be accessed concurrently by multiple users, because, after all, this is a shared network file system. A share can have access rules set on it; these are share-level permissions about who can mount the share and do things with it. Right now, the rules are based on IP information; however, work is in progress on making shares accessible to LDAP users and groups. A share has a presence on a specific network because, remember, shares are expressed in the tenant space. So it defines a specific L2 and L3 segment: a Neutron network and subnet. Right now, a share can only be associated with a single network. Another important concept is the security service. This is a tenant's own security service that allows the tenant to create a mapping for the users of that share. You can use LDAP or Active Directory, and the API actually allows you to associate a share with multiple security services at once, if your backend supports it; not all backends do. In addition to security services, we have the ability to do snapshots. Right now they're read-only, and you can create a new share from a snapshot as well.
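At the CLI, those last two concepts look roughly like this; the option names are illustrative of the client of that era, so treat the exact flags as assumptions:

    # Register the tenant's own directory service for user-based access:
    manila security-service-create active_directory --domain example.com \
        --dns-ip 10.0.0.100 --user manila-svc --password secret

    # Take a read-only, point-in-time snapshot of a share...
    manila snapshot-create rnd_share --name rnd-snap-1

    # ...and stamp a brand-new share out of it:
    manila create NFS 1 --name rnd_clone --snapshot-id <snapshot-id>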
The backend, of course, defines which specific filer hosts the share, and the driver is the vendor-specific piece of code that plugs that implementation in. A reference Linux driver exists, and a large number of vendor drivers have already been developed.

So let's go quickly through the basic CRUD operations; there are no surprises there. With shares, you can create, delete, and list shares. Obviously, you can show share details and rename shares, and, like any OpenStack object, shares have metadata that you can view and edit. On the access side, you can allow and deny share access, based, as I said before, either on IP information right now or eventually on the identity of the user; and obviously, you can list the access rules so you know what operations you've done. On the network side, a share is associated with a network, so creating a share network simply means associating a share with a specific, existing Neutron network. You can delete this association, list the associations, and activate the share network, which will have Neutron plug the port in.

So next, Ben will talk about the process structure as well as the architecture of Manila.

So Greg just introduced a number of concepts. Are there any questions in the room before we go into the architecture? You can use the mic if you want to ask questions; otherwise, we can save them until the end. Okay. So, I'm Ben Swartzlander; as mentioned, I'm the PTL for Manila. Can you hear me all right? Okay. So, if you know the architecture of Cinder, you know the architecture of Manila: it's the same. For those of you who don't, I'll talk through it. The most important thing to know is that Manila is not in the data path. Manila is about command and control: management and automation of shared file systems. There's an API service, which is how the outside world interacts with Manila through REST APIs. There's a scheduler service that makes decisions about where to put things, with a certain amount of intelligence that's programmable. There's a database that stores state, and a message bus the services use to communicate with each other. And then there's a bunch of Manila share services, which is where all the magic happens. Each Manila share service has an instance of a driver, which is responsible for translating requests from the user into actual operations on whatever storage backend that driver supports.
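In deployment terms, that is the same process layout Cinder uses. A sketch of what you would expect to find running on a Manila control node:

    manila-api         # REST endpoint the outside world talks to
    manila-scheduler   # programmable placement: picks a backend for each new share
    manila-share       # one per configured backend; hosts the vendor driver

    # The services coordinate over the message bus (RabbitMQ in a typical
    # deployment) and persist their state in the shared SQL database.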
While the architecture of Manila is very similar to Cinder's, the main difference between the two is that Manila cares very much about multi-tenancy and networking, because it's providing the share all the way from the storage backend through to the guest network. It matters what type of multi-tenancy abstraction you're using within OpenStack. OpenStack supports the concept of flat networking or segmented networking, whether through VLANs, GRE tunnels, VXLANs, et cetera. The other wrinkle Manila has to deal with is that while some storage backends have native support for multi-tenancy, not all of them do, and we don't want to leave anyone out; so Manila has mechanisms to help those other backends support multi-tenancy too. I'll go through all the different scenarios that we support.

If you're using a flat network with a backend that doesn't support any native multi-tenancy, it's simple: you just use a single instance of the storage controller on the backend, and everyone shares the same network. It's not particularly secure, but for a private cloud use case this may be perfectly adequate. If you're using a flat network and you have a storage controller that does support multi-tenancy, you get a little more separation, in that each tenant will have its own instance of the storage server, but they'll all be on the same network; so you don't get complete isolation, but you get somewhat better isolation. If you're using network segmentation like VLANs within your cloud and you have a storage controller that supports native multi-tenancy, then Manila takes care of actually creating virtual storage server instances, provisioning them IP addresses through Neutron, connecting those storage server instances to the Neutron network, and ensuring that you have connectivity all the way through from the storage backend to the tenant network. And then the last case: if you have a segmented network and you're using a storage backend that doesn't have any support for native multi-tenancy (this doesn't exist yet; it's in a prototype stage), Manila will create a gateway, a bridge VM that sits between the tenant's network and the backend network, and it provides access to only the shares that the specific tenant is allowed to see. That code is based on the generic driver, which I'll talk about later.

Another difference between Manila and Cinder is the issue of mounting the shares you create. One of the great things about Cinder is that you do a Cinder attach and the volume just pops up in your VM. Because Manila provides shared storage all the way from the backend through to the guest, and the hypervisor isn't involved, there's a manual step at the end to mount the share on the guest, as was illustrated during the demo. Obviously, this is something users want automated, and something we're interested in automating. We have a couple of ideas from the community on how to do it, but we haven't done anything yet. One idea is integration with cloud-init: at boot time, get a list of shares that should be connected to the host, and automate the mount process then. The problem with that, of course, is that if you create a new share while the host is running, it's still manual. We've considered having a user-space daemon that Manila can poke to make it mount new shares as they're created; that has security concerns associated with it, plus the whole issue of maintaining a separate code base for this little daemon, so I'm not sure that will be the accepted solution. Another possibility is having Manila SSH directly to the boxes and issue the mount commands itself; that works great for Linux, not so well for Windows. We'll probably end up implementing some or all of these schemes, and we're looking for great ideas if anyone has a better solution to this problem. Join the community and contribute; we're looking for solutions to some of these trickier problems.

So I'm going to switch gears and start talking about the drivers. I'll first talk about the generic driver. The generic driver is in many ways the most important driver within Manila, because it's the one that will be run as part of the CI system, and it's the first driver that many users will experience when they install DevStack and just try it out. It's 100% open-source software.
The way the generic driver works is that when Manila gets a request to create storage, it creates a Nova VM that's not owned by the tenant, with two network interfaces: one connected to the tenant network and one connected to the backend network. It then creates a Cinder volume to actually store the data, formats it ext4, and uses NFSD or Samba, whatever it needs, to actually serve the share onto the tenant network. These VMs are reusable, so if you ask for multiple shares, Manila will only create one VM, and multiple shares can be served from a single VM. All the communication between Manila and the VM is over SSH. The actual Glance image that runs on the generic Nova VM is customizable: you can run whatever flavor of Linux you want, patch it, install stuff on it. The Manila team provides a reference implementation of a generic driver image based on CirrOS, the well-known, extremely stripped-down Linux image we all use in our DevStack instances. Any questions about this before I hand it over to Tushar? You want to use the mic?

Can I create a share and then export it to some instance that's outside of OpenStack, say a bare-metal Linux server running somewhere else? Yeah, absolutely. As Bob indicated in the demo, the only thing Manila cares about is the share network. It has to exist inside Neutron, but it doesn't necessarily have to be a Nova network; it can be a network that contains bare metal. And the access rules are just IP addresses, which can be the IP addresses of anything; they don't have to belong to an instance. So the answer is yes.

Can you clarify: you said something about using Samba, or SMB; how does that work? So the goal of Manila is to support all shared file systems. The two most well-known are NFS and CIFS (Samba, or SMB), and there are other shared file systems out there, like CephFS, GlusterFS, and GPFS, that we intend to support as well, although individual backends won't support all of them; for example, the NetApp backend won't support GlusterFS. Right now the generic backend only supports CIFS and NFS, the two most common ones; you could imagine extensions to support others. So the generic driver will spin up NFSD or the Samba daemon within that Nova instance, based on the protocol choice made by the client, to support those two protocols. Does that answer your question?

All right, so Tushar... oh, we have one more. It looked like the protocol was a property set at share creation, as opposed to something chosen at attachment time; does that mean shares can only be attached via one protocol? Yes, today shares are single-protocol. We know that some backends have support for multi-protocol access; we haven't figured out how to expose that through Manila yet. That's a future thing, definitely valuable, but it doesn't exist yet.

You mentioned snapshots; is that a snapshot of the file system, or is it a Glance-based snapshot? It's a snapshot of the whole share, so if you have a one-gig file system, it'll take a snapshot of all the files in that file system. I didn't see it in your list of APIs. The list of APIs wasn't a complete list, because that would be very long. Yeah, it's not meant to be the definitive documentation; we certainly have the code there. It's more to give you a sense of how it looks and feels. It's not dissimilar: the CRUD-style operations, as well as these constructs, are directly accessible over REST, over the CLI, through Horizon, et cetera.
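To give a sense of how much longer the full list is, here is roughly the shape of the command surface the manila client exposes; an illustrative subset, not authoritative documentation:

    manila create | delete | list | show | rename        # share lifecycle
    manila metadata | metadata-show                      # per-share metadata
    manila access-allow | access-deny | access-list      # share-level permissions
    manila snapshot-create | snapshot-list | snapshot-delete
    manila share-network-create | share-network-list     # tenant network plumbing
    manila security-service-create | security-service-list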
All right, I'll hand it over to Tushar to talk about the Gluster driver.

Thank you, Ben. Hello, everyone. I'm here to talk about what Red Hat and Manila are doing together, and specifically about the GlusterFS driver for Manila. For those of you who don't know what Gluster is, it's a distributed file system, so as you can imagine, we have a lot of interest in Manila and the file-sharing business. What we have today is a single-tenant driver, the GlusterFS driver, which uses Gluster NFS, a userland NFS implementation for Gluster; Gluster can natively export its namespace via NFS. Obviously, single-tenant is not going to cut it, so going forward we're working on a multi-tenant driver that uses what is known as NFS-Ganesha, a userland NFS v4 implementation. It's a community project, and it's multi-tenant, so we can use it to provide a multi-tenant driver in the future. That's a quick overview of what we're up to.

This is just a quick representation of what it means to use the current GlusterFS driver. I'm not going to belabor the whole thing, because it fits exactly into what these gentlemen described earlier, but in terms of what we do on the Gluster side: you can see here in the blue box there's a Gluster Manila volume, which is the global namespace that we export over NFS. When you issue a Manila create for a marketing share or a sales share, we create a subdirectory underneath it, and then a Manila access-allow lets us grant access to the particular IP addresses that require that share. So, in a nutshell, you can go to individual guests and provide access to particular shares, similar to what was described earlier. Again, like I said, the future for us is to provide the multi-tenant GlusterFS driver, which is work in progress, and to migrate to NFS-Ganesha, which I mentioned earlier.
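A rough sketch of what that layout means on the Gluster side; the volume name is hypothetical, and the exact export-ACL option varies by Gluster version, so treat the last command as conceptual:

    # One Gluster volume is the global namespace backing every Manila share:
    gluster volume info manila-vol

    # Each "manila create" carves out a subdirectory of that volume:
    ls /mnt/manila-vol
    #   share-4f1c...   share-9a2e...       (one directory per share)

    # Each "manila access-allow <share> ip <addr>" becomes a Gluster NFS
    # export rule, conceptually something like:
    gluster volume set manila-vol nfs.export-dir "/share-4f1c(10.0.0.2|10.0.0.6)"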
With that, I'll hand it over to Rob.

Okay, thanks. So, just a bit about a number of the things we're looking at in engaging with Manila. Our systems have a legacy, a rich set of capabilities associated with shared file system deployment over NFS or CIFS, and the intent is to leave none of that behind: don't let Manila abstract away capability. So one of the things we haven't talked about yet, which should be a near-term evolution, is applying the same sort of type construct that you see in Cinder, where you can build a catalog of capabilities delivered in the context of Manila. You could create a catalog that says, for example: give me something that has deduplication, or compression, or that lives on a particular media type. It's arbitrary, in the sense that the administrator can define what the catalog looks like. So for all of those capabilities on that prior slide, the intent on our part is to leave none of them behind and to make them explicitly accessible.

When Ben mentioned that the reference, or generic, driver is in many ways very important, that's mostly from a developer-centric perspective: certainly for ongoing development and submission testing via CI, Tempest, and such, yes, true. But from a deployment perspective, I'd argue that the implementations the folks here on this stage are talking about are almost certainly going to be the more obvious choice for real deployments, because they offer differentiated capabilities over that lowest-common-denominator generic driver.

So, talking about clustered Data ONTAP a little: frankly, I could be very long-winded about this slide, so let me distill it. Clustered Data ONTAP is a scale-out storage operating system that exists over many nodes. We virtualize the network; we virtualize the disks, or rather the storage containers; and of course the storage controllers themselves. They can move from one node to another, and you can move storage in and out. Another aspect, as it applies to Manila specifically, is that you can supply multiple different I/O engines, if you will, into a single NFS namespace: a global namespace that spans all of these individual nodes. When you apply parallel NFS (pNFS) on top of that, you get a distributed I/O characteristic and avoid the common problems of fan-in and whatnot. These SVMs, storage virtual machines, are the measure of inherent, secure multi-tenancy. So in the Manila implementation that already exists for clustered Data ONTAP, namely CIFS and NFS, with pNFS as an option to the extent that the clients interacting with Manila can support it, we map a Manila share specifically to that multi-tenant construct, the SVM, and the network interfaces are secured to that specific storage virtual machine. So you have a strong measure of separation from one tenant to the next. You layer that in with the distributed I/O characteristic and the horizontal scaling characteristics of clustered Data ONTAP, plus the ability, through the typing mechanism, to get at a differentiated set of capabilities, which I'd argue might be among the more important options, and certainly something the other folks here on the stage represent as well. And with that, over to EMC. Thanks, Rob.
So, for EMC's implementation of Manila: from a driver perspective, what we decided to do is build a common driver and support each product via a plug-in architecture, if you will, because we have multiple file offerings. The initial implementation of the plug-ins is for our VNX and Isilon product lines, with of course more to come in the future.

VNX is our scale-up offering, and on VNX we have a construct similar to the storage VM: what we call the virtual data mover. Today we have a prototype driver available, which isn't contributed yet, but we're hoping to change that soon. The prototype driver lets us specify either a virtual data mover or a physical data mover, along with the network IP address that should be used. Going forward, we plan on implementing some of the share network APIs as well, automating the creation of storage VMs, the virtual data movers, as well as the virtual interfaces required to support the tenant network.

On the EMC Isilon side, which is our scale-out platform, we basically have a single namespace; what we do there is create a new export and assign a quota to it, and that quota is then available as a share. Like the VNX plug-in, it already supports the majority of the Manila API, and likewise it uses a host ACL for both SMB and NFS. Going forward, again just like with VNX, we're still evolving toward the multi-tenancy APIs; there, we're going to use our SmartConnect and Access Zones features to facilitate multi-tenancy on Isilon. We also have some QoS investigation to do, and as Rob said, once we get further along with the type construct, hopefully we'll leverage some of that as well. So with that, I'll pass it on.

I'm Dean Hildebrand, from IBM Research. GPFS is IBM's scale-out file system. It's used in numerous products that IBM sells, but here we're focusing on the core file system itself. We can support multiple backends, and we really like Manila, both as something we see as missing across all virtualization solutions today and as a way to get our customers into OpenStack, moving them from bare metal into VMs, or a mix of the two. We would like to support both the clustered NFS solution that we have with GPFS, which includes pNFS, as well as native GPFS support in Manila. By enabling this, you can take advantage of all the features we offer today, and it gives users a way to manage and use the storage they already have available. We also have a Cinder driver today, and we see this as yet another feature that can be tacked onto the same data plane as we keep adding these features. We have a prototype available today that works with NFS; it's also not yet contributed, but it's something we want to contribute as soon as we return home and get back to real work.

All right, great, thanks, Dean. So, in terms of where the project is today: we've been to the technical committee once to talk about what it would take to get the official incubation tag put on Manila. We went through a short list of items, and I think we've largely completed those, so you should expect that official status to be granted to Manila during the Juno time frame.
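For reference, the type construct that Rob and the EMC folks alluded to is the one Cinder already has; a Manila share type would presumably mirror this mechanism. A sketch, with made-up capability keys:

    # Build a catalog entry and tag it with backend capabilities:
    cinder type-create gold
    cinder type-key gold set capabilities:deduplication='<is> True' \
                             capabilities:compression='<is> True'

    # Users then request the catalog entry, never a specific backend:
    cinder create 10 --volume-type gold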
At least from a NetApp perspective, I can't have a customer conversation these days where Manila does not come up, and I know it's very similar for EMC and others. So there's a large number of POCs, some already under way today, and a long list of people who want to get started, so we're certainly trying to gather that user input to inform the development process and make sure we understand the right priorities.

Currently, what we're thinking about focusing on for the Juno release: as Ben mentioned, the notion of gateway-based multi-tenancy, to align differing approaches to network segmentation with the multi-tenancy support that a particular storage vendor may supply, making sure we can map those together. The notion of type support, discussed earlier, so you can create a catalog of share services. The notion of supporting multiple different vendor backends, and subsequently vendor drivers, within a particular deployment of Manila; that works in Cinder today, and I think there's a little work left to get it enabled and fully vetted in Manila. And we'll also continue to investigate the automation of mounting shares, as Ben discussed. So that's the depth approach. We're also seeing breadth in terms of the different integration points being explored: more vendors, more backends, and more protocols, so the notions of CephFS, GPFS, and other shared file systems are being kicked around in discussions and meetings, and we expect to see those mature during the Juno time frame.

So with that, we'd obviously encourage you to get involved with Manila if it's of interest to you. As we mentioned before, it's already live on Stackforge, and it's developed under the normal OpenStack processes, so there's nothing unusual. The demo that I built was actually on a DevStack instance; if you go to the Manila wiki that I've linked on the chart, there's a new page called "icehouse devstack" that lists the instructions, so it's very simple to integrate this into your localrc, run stack.sh, and have the Horizon, command-line, and API endpoints we've talked about today. Very easy to get started. We also have an IRC channel where several of us sit regularly, as well as weekly meetings; the typical OpenStack process for developing a project is what we're following, so nothing unusual there.

This afternoon we're having a design session. Due to space issues, we're not going to be in the World Congress building; we'll be up the street a bit at Stats sports bar, where we've reserved a private room, from four to five today. Happy to have folks come; from a development perspective, we're going to be talking about some technical issues around Juno as well as some blueprints, so we want to make sure folks are aware of that session, and if you're interested, feel free to join. So with that, we've got a few minutes left for questions; happy to take those. If you do have questions, please come up to the mic so everyone can hear.

For a storage system that has both block and file support, is it possible to have a common driver that supports both Cinder and Manila, or do we need separate drivers? Not at present; but let me defer to the PTL and Rob on that.
So the short answer is: not today. Because of the commonality between Cinder and Manila, you could see a merging of that kind down the road; today, the code bases are separate. You could certainly go a long way down the path of a single object with duality of access to it, but that's not where we are in the here and now: completely separate. The same capacity pools, or maybe even the same capacity pool vended in different ways? Yes. But in terms of the control plane and the access into it: different.

And will volume type be called volume type or share type? What do you think? Sounds like share type to me. Yeah; right now, I think the code says volume type. If you have an opinion, feel free to bring it to the IRC channel, or write a blueprint. There's been some discussion on that in the community, but again, it's not a dictatorship, it's a community, so you can make a different proposal. I mean, if I go to Cinder and see volume type, and I go to Manila and see the same volume type, but they're different, it will be quite confusing. Yeah, there are lots of us who agree. Thanks.

Hi, are there any plans to integrate some form of access control other than by IP address, like certificates, or any more generalized security infrastructure? Yeah, sure; Ben, do you want to talk about that? Well, because Manila connects the file system all the way through from the controller to the user, all the regular file system security protocols apply. So if you're using CIFS and you have an Active Directory domain, access is through Active Directory, and you, as the tenant, supply your Active Directory parameters to Manila, so that when it creates a CIFS share for you, that server is part of your Active Directory domain. It's all controlled per tenant, and you have all of that security. The same is true with NFS: if you're using Kerberos security with NFS, all you have to do is tell Manila the IP address of your Kerberos server, the IP address of your LDAP server, and the user names and passwords it needs so it can join that domain, and then you can use Kerberized NFS and have secure access that way. So that's the concept of the security service; we didn't touch on it in the demo, but that's the object where you define those relationships for Active Directory, LDAP, and Kerberos.

Another question: you showed that the user actually needs to do two things; one, they need to list the IP addresses of the clients, and two, they need to mount the targets from the client. Now, the IP addresses can be pretty dynamic; you could use DHCP in Neutron, things like that. So why not do an attach from the VM side: attach a share and automatically figure out the address? Maybe the address hasn't even been allocated yet, because it's allocated once the instance goes up. So that would be integration with Nova, which hasn't been done yet, but obviously we could. The second part, the export, is simpler, because you basically do the same thing for all shares with a fixed address, right; the real point is that the IP address of the client in the access control is what should probably be automated.
Yeah, the intention is that Manila can provide shares to Nova instances and to other things, so in the case where you're sharing with a Nova instance, we would want to provide that kind of integration, and that would mean changes to the Nova project to enable specifying the network. Because in Nova you would have an interface, so once you do an attach, potentially from an interface, you could actually know the network too, right? Although an instance can have multiple attachments to multiple networks as well, and then where do you want the plumbing to be done? So I think it's a great comment, in terms of: could you specify an instance ID versus an IP address? Initially, I think a variety of options might make sense there. There's an issue with the order of operations: you have to create the share before you can attach it, and at the time you create it, the network it's going to exist on matters for scheduling. If you know the instance ahead of time, maybe you can feed that in; but if you create a share with no network attached to it, we won't know where to put it. Quite frankly, when you define a VM, you don't necessarily instantiate it at the same point, so you'd have to have some sort of event notification for when the instance actually goes up and gets its IP. From a user perspective, that's really the problem: you're expected to put in an IP address that hasn't even been defined yet. You have some valid points, and we welcome you to the Manila development community; we'll be happy to discuss it in greater detail at four o'clock at Stats.

Just wanted to ask: is Manila, or will Manila be, compatible with Ceph? So, we've talked to the Ceph guys, and we intend to support Ceph alongside all the other protocols. So far there hasn't been any participation in the community from them, but they've expressed an intent to do so, and we want to welcome them. Okay, thank you very much.

One more question: apart from the VM instances themselves, do you see any other consumers of Manila shares, like other services, Sahara, anything else? Yeah, absolutely. Cinder can actually be used outside the context of OpenStack, and the intention is for Manila to be usable outside the context of OpenStack as well; if you want, you can use it to vend storage to anything. There's even a model where you can run Cinder on top of Manila: if you're using an NFS-based Cinder driver, you could use Manila to create the shares, then put Cinder on top of that and use it to create volumes, which is sort of the opposite of the generic driver use case. You could use it to provide storage to databases, or, yeah, to Sahara. Think of defining a Heat template for a traditional Oracle workload, right, where I may have an NFS share for the data, an NFS share for the logs, et cetera; I could define that as a Heat template, connecting it all in, so you could pull in multiple shares and have it all deployed by the click of the button that creates the stack.

How do you see the integration points with Heat going, if someone wants to allocate a cluster and a share and all that stuff at one time? I'm sorry, I didn't quite catch the first part of the question. How would you allocate shares in a Heat template down the road? I know it's not there yet, but how do you see integration with Heat working?
I mean, at least conceptually, it would be very similar to what Cinder has done; at the outset, it's not too different. Certainly the networking component would have to be specified to make sure the linkage occurs correctly, but the premise holds: I want a share, I want this protocol or this share type and this size, and have that directly mapped to the correct instance IDs. As for the question before about specifying the IP addresses, that could all be automated by Heat underneath the covers so that you don't have to worry about it. So that's how I'd envision it: initially it would look very similar to Cinder, and perhaps some of those orchestrations of the subsequent access rules and security services would just be done automatically.

So with the generic driver, I saw that you spin up another VM that talks to Cinder and then connects to the network. Yep. That seems like it would be a single point of failure, or a choke point; how are you envisioning handling a large amount of bandwidth that has to go through that? Yeah, I think that goes back to a point Rob made earlier: the reference implementation is a way to get started and understand how it works. It's not necessarily what we would recommend putting into production, in the sense that, obviously, it's a single point of failure: a single VM, no load balancer, not optimized from a throughput perspective at all. It's more to instantiate the concept. If you look at Cinder, there's a reference implementation that exists there too; some people use it by default, some people don't. It's more meant to be a way to use Manila by itself, without having to purchase or deploy a vendor product in order to access it, if that makes sense. Yeah, that makes sense. It's just, something we could really look at: instead of deploying a single VM, perhaps we could deploy a Heat template.
That would give you HA, load balancing, auto-scaling, all those properties. So it's something we could certainly talk about. The way it exists today (keep it simple when you start), it's a single VM with a Cinder volume per share, and the Cinder volume matches the size of the share you wanted; so it's a fairly clean mapping, but not meant to be production-ready. Awesome, thank you. Sure.

So I noticed the Nova compute instance that's spun up in the reference implementation is not put in the tenant that creates it; where does that go? It goes into a particular service tenant: the same tenant that the actual Manila service is registered under in Keystone. Yeah, I didn't cover this during the talk: both the Nova VM created by the generic driver and all the Cinder volumes it creates are not owned by the tenant, partly so that you don't accidentally delete them and don't see them in your lists, but more importantly so they don't count against your quotas, right? You have separate quotas for shares, and presumably separate billing if you're doing some sort of chargeback, so you don't want to double-charge people for the VMs and the storage consumed by Manila when they're using the generic driver.

All right, any other questions from folks? All right. One thing to remind you: we do have a survey on the tables that we'd appreciate you filling out, just to give us some feedback; drop those off or hand them to folks as you walk out the door. We'd really appreciate folks filling those out, and thanks for coming.