Hello. Good afternoon, everyone. Thanks very much for making it today. I wanted to start off this presentation, which is going to cover storage on Ubuntu. The presentation is split in two pieces: I'm going to start by talking about storage in general and what's new relating to storage in Ubuntu 16.04, the latest release, which just came out last week. And then we're going to have guests from Deutsche Telekom come up here and talk about how their implementation of Ubuntu plus OpenStack plus their own technologies is coming into production.

So, to start out with, I want to give you some background. I'm a Canonical veteran. I've been at Canonical for 11 years, since before Canonical was even called Canonical; we were called NoNameYet.com. So I've seen the company come through a lot of different changes, and I've always been put in charge of interesting projects that sit on the boundary, things that are becoming interesting to the mainstream. I started at Canonical working on Launchpad, which came at the time when we were starting to think about distributed version control and how to enable collaboration at scale, and which became the foundation for OpenStack development. I then worked for three years as VP of Engineering at Linaro, bootstrapping that organization, which is focused on Linux on ARM. More recently, I was brought back into Canonical to work on our storage product. Mark came in and said, look, storage is going to become one of the things that defines the change in IT that's coming. I was interested in that and started looking into it.

Now, I'm not from a storage background, but I find storage to be something that is inherently easy to fall in love with. Because storage is the one thing that, if you're an IT provider delivering services to your own customers, everybody loves and everybody could use more of. So if you're only providing one cloud-style product today in terms of IT to your organization, provide storage, because every application and every user can consume more of it.

If you think about delivering storage, there's really only one important requirement, and that is price. Yes, people will say storage needs to be high performance, but at what cost? So the important thing is getting the storage price right. But even that is not exactly true. If you are delivering storage services in 2016, it's not just getting the price right, it's beating the AWS price. This is what's going to steal your own IT budget in the coming years: if you're slow at providing storage, or if you're providing it too expensively, then your customer is going to say, instead of buying from my own IT department, I'm going to set up a credit card and start buying it off AWS. That's really what the challenge is today. If you're providing in-house storage as a product for your own organization, remember this: you're going to be benchmarked against AWS prices, whether you like it or not. It won't be an immediate transformation; people are not going to just drop your SANs and NASes and go wholesale to cloud storage, but there will be a trend there, and it will erode over time.

So let's look at beating the AWS price. This is a comparison between disk drive prices, which are really the atomic unit you need if you're providing storage at all, and what AWS rents that capacity for.
So if you look at disk costs and AWS costs, there's a multiplier there. This is looking at the block storage side of it, and on block storage you can see there are, I don't know, 7 to 30x multipliers. And if you look at object storage, and you can argue that Glacier is not disk, and who knows what exactly Glacier is, is it tape, is it optical, is it a mix, the gap exists there as well. Now, this is a bit of a false dichotomy, because on one side I'm showing the cost of a single disk drive, and on the other what it costs to provide a storage service. But it's an important comparison, because if you're looking at the left-hand side, then whatever I put on top of that disk price is fat that's going to eat into my margin. So if I need to triple replicate, which is the industry standard, then I already have 2x fat on top of that original cost. Now, there's one thing which is helpful for somebody providing on-premises storage, which is that AWS puts in the fine print that it will charge you more if you try to pull your data out. So that's an important additional cost to factor in when you're doing the comparison. But fundamentally, you need to do that comparison yourself when you're providing your own storage internally.

All right. So there is a multiplier there, and it's not trivial to beat AWS, but it's definitely possible. It's possible because, at the end of the day, you can attach one drive, or three drives with 3x replication, to an existing cluster and you still haven't lost to Amazon. But in the long run, you have to think about how you expand, how you actually make your platform available at scale at a competing cost. So my message is: push for the lowest overhead per disk possible. Look at the disk costs and look at what overhead you're putting on top of that. The smaller it is, the better off you'll be in the comparison.

Okay. So putting storage on a diet in practical terms means two things. First, if you're working with appliances, because you have high-performance, high-IOPS, or legacy storage in place, remember to use your appliances wisely: they need to be used in the places which actually demand SAN- and NAS-grade performance. If you use them for archival, you're paying too much for something which, at scale, should be going to a cheaper option. And you should look at software-defined, because software-defined, as we've seen happen in the compute space, will come into storage in the same way, changing the way people think about how to allocate and deploy storage. Increasingly, archival is considered software-defined in the leading organizations out there, and it's the only cost-effective way to put the minimum fat on top of the per-disk price. I'm talking about the cost point because what Canonical and Ubuntu stand for, when we're talking about private cloud, is making your own IT cost-effective against the public cloud. How do you survive in a world of AWS, Google, and Microsoft giants? Only if you really look at your costs and make sure you've got the right balance there.

So first, full automation. If you want to compete with anyone who's doing automation at Google scale, you have to be as automated in-house.
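As a flavor of what that means in practice with the tooling described next, MAAS for bare metal and Juju for modeling, growing a software-defined storage cluster should be a couple of commands rather than a re-imaging exercise. This is a minimal, hedged sketch using the Ceph charms; the charm names follow the Juju charm store, and the unit counts and device names are purely illustrative.

```bash
# Model the cluster: monitors plus OSD hosts, wired together by a relation
juju deploy ceph-mon -n 3
juju deploy ceph-osd -n 10 --config osd-devices=/dev/sdb
juju add-relation ceph-osd ceph-mon

# "Add 10 more nodes": MAAS provisions the metal, the charms reconfigure the cluster
juju add-unit ceph-osd -n 10
```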
So the work that we've done around automation tooling, with MAAS for bare metal and Juju for the actual modeling and deployment of the software and for the management on top of it, that's required if you're going to provide cost-effective storage. You can't just rely on imaging the systems yourself, using your own hackery-whackery to get the stuff up and running. You can't do that. It has to be fully deployed and automated out of the box. Add 10 more nodes: you put 10 more nodes in, and the system has to auto-reconfigure. This is what you get when you're using Juju and MAAS, and I hope that's what you're aiming for internally as well.

Second, if you're running OpenStack and you're going software-defined, then why not put storage on the same servers as your compute? You save on storage servers already; there's less fat per disk. In Canonical's reference architecture, what we deploy for our OpenStack customers, we put Nova and Ceph, if you're using Ceph as your software-defined storage, on the same node. Why do we do that? First, because we can, because of the performance today and because containers provide you with the guarantee that it will run stably. Second, because you avoid having to worry about the storage cluster you need to put alongside your OpenStack. Put the two together; it makes sense.

Erasure coding with smart caching. Yes, you may think that you need to eat the 3x replica cost because that's what's required for your application. But if erasure coding is at all possible, explore it, perhaps with a fast caching layer with flash on the front, to make it cost-effective. You need to measure; not every application will accept this, but there are applications which will, and all of a sudden you went from 3x to potentially less than 1.5x.

And finally, operational cost is also the running cost of your cluster. So look at low-power architectures: Canonical, with Ubuntu, provides support for all the architectures that are important in the data center today, including ARM, which can be deployed in a low-power configuration. Look at that as an interesting alternative. Storage on ARM is very interesting because there's no question around binary compatibility. Storage is storage: you either consume it as block, or as a file with SMB or NFS shares, or as object through a web API. So regardless of what you're using there, an alternative architecture at the bottom makes no difference at all.

Okay, so keep this in mind. When we did everything around the storage product launch, our core message was around pricing, and we were saying, make sure that you are competitive against public cloud pricing. I won't go into detail about what the offering is, but if you're interested in talking about support for your software-defined storage, come and talk to me after the presentation and I'll cover it in detail. This is an array of our headline customers for storage. We have many more storage customers out there, but these are the ones that agreed to say, we're proud to use Ubuntu Advantage to support our storage. And these are the technologies that we're supporting. ScaleIO and Quobyte are new for 2016; you'll see them come online this year, and we'll do announcements at the right events as they land. Quobyte is very interesting: file-based, so shared file storage, and it comes from the team that pioneered XtreemFS. And EMC ScaleIO, everybody has seen this in Randy's blog post.
It's basically a block software-defined storage system that "screams", is how they portray it. Okay. So I just want to use the tail end of my 20 minutes to talk about the new technology that's coming in 16.04. If we're talking about putting storage on a diet in general, what new dieting technology comes with the new release?

First, and we touched on this this morning, we are for the first time delivering ZFS as a production-ready file system that you can use on any system out there. This is much more about single-node or system configurations than about a storage service; if you want to build a storage service on ZFS, you're going to have to put a lot on top of it. But if you're looking at a single system, ZFS is a very interesting storage alternative. It's the first time that anyone has done this in a commercial-grade, supported setting, ZFS on Linux, so I think it's a big step for everyone. It brings to Linux an incredibly solid and performant file system, something which has seen many years of production use. And as much as people are concerned about the controversy around whether ZFS on Linux is allowed at all, I think there's enough precedent today: AFS has been shipped on Linux for a long time, NVIDIA's driver has been on Linux for a long time, and ZFS, likewise, was not developed inside Linux. So, side-stepping the controversy for the moment, if you want, you can ask me for more detail in the questions; I have lots of opinions on this. It's an incredibly solid file system that brings features into Linux that no other file system has today.

So the first feature, really, with Ubuntu and ZFS is that on Ubuntu 16.04, ZFS just works. You don't have to do any magic voodoo, installing a compiler, DKMS craziness. The module is there: just modprobe zfs and you'll see that it's loaded, and the tools will transparently load it in the background if you're using ZFS anyway. ZFS, for those that are less familiar with it, has a couple of interesting features. The headline one, I think, is data integrity built in. Not only do you have block-level checksumming, but because the blocks in ZFS, as in Btrfs, are laid out as a tree, you have checksumming at the tree level: every tree node has a checksum of all the children attached to it, which means you are safe even if data moves around on the disk. ZFS is also interesting because it has a volume manager built in. You basically create a ZFS pool, and that pool is where you define new block devices that you offer for consumption, but you can also define multiple backing devices. So you can use ZFS to replace software RAID, or even hardware RAID if you don't care about the hardware part of that, and you can use it, like LVM, to create additional block devices that you make available on top. The volume management piece is very cool because it has thin provisioning and snapshots, something which we use heavily in LXD; if you were here in the morning, there was a presentation showing LXD containers starting up on top of ZFS, how fast that was, and restoring from snapshots. That's really one of the reasons why we looked at ZFS as something we could bring in. And finally, it provides optional block-level de-duplication and compression.
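Pulling those pieces together, pools, datasets, snapshots, thin-provisioned block devices, and the compression just mentioned, here is a minimal sketch of what that looks like on Ubuntu 16.04. The pool name, device names, and sizes are purely illustrative; on real hardware you would pick devices and redundancy to match your needs.

```bash
# Load the module (shipped with Ubuntu 16.04; no DKMS build step needed)
sudo modprobe zfs

# Create a mirrored pool from two spare disks (device names illustrative)
sudo zpool create tank mirror /dev/sdb /dev/sdc

# A dataset with transparent LZ4 compression turned on
sudo zfs create -o compression=lz4 tank/media

# A thin-provisioned block device (zvol) carved out of the same pool
sudo zfs create -s -V 50G tank/vol0

# Instant snapshot of the dataset, and a writable clone of that snapshot
sudo zfs snapshot tank/media@before-upgrade
sudo zfs clone tank/media@before-upgrade tank/media-test

sudo zpool status tank
```

De-duplication is a separate per-dataset property (zfs set dedup=on), and as noted next it has real trade-offs, so it is worth measuring before turning it on.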
So with ZFS, you can turn on compression transparently, and the CPU will compress data before it is stored on disk. And you can turn on de-duplication at the block level, which makes ZFS track the blocks being written and check whether each block is already stored somewhere. Both of those have important trade-offs, which is why they're optional, but fundamentally they give you additional flexibility when you're doing a deployment. And as I said, under the hood ZFS is the root filesystem for LXD containers: if you run lxd init, it will ask you if you want to use ZFS, and if you do, it will basically turn on all the magic knobs underneath for you.

Ceph Jewel. The latest release of Ceph is now included with 16.04 and fully supported. The headline features here are CephFS, something we've been waiting on for a long time; I think Sage has said that he started Ceph for CephFS, so it's great to see that finally come into production, and we're actually actively working with customers on CephFS now. Better support for geo-distributed or multi-cluster setups: if you've got multiple Ceph sites and you're streaming across them, either block or object, that's really been improved in the newest release of Ceph. And, in general, better object storage API coverage and compatibility. This, I think, is an important change, because when people are looking at Ceph and comparing it against OpenStack Swift and S3, they often say, well, I won't use Ceph because the S3 or Swift compatibility APIs are pretty thin. This release is an important change in that respect; it's much better now. We're also providing customers with an experimental dashboard that we've been working on, which tracks per-OSD performance. It tells you what each OSD is effectively doing, whether it's taking a long time to write, whether it's backed up, what all the OSDs on the cluster look like. And, I think most importantly for anyone who runs software-defined storage at scale, it tells you who your busiest clients are and which clients are doing the smallest writes; small writes, for anyone who has managed software-defined storage, are the performance killer. So it's a tool to help you, when you're having a performance issue, look at the dashboard and figure out who exactly is causing the problem and who is suffering because of it.

This is the Ceph support matrix for, well, basically the two LTSes here. The graph basically tells you there will be one version refresh within the 16.04 LTS life cycle. For those of you not familiar with it, we make multiple versions of certain applications available for the LTS releases of Ubuntu. So if you're on 14.04, we released with Firefly, but you could install Hammer on top if you wanted. And now with 16.04, we're releasing with Jewel, but we will offer the L version on top as well.
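Tying Jewel back to the earlier point about erasure coding with a flash caching layer in front: below is a minimal, hedged sketch of that pattern on a Ceph cluster. Pool names, placement-group counts, and the k/m values are illustrative; with k=8 and m=3 the raw-to-usable overhead is 11/8, roughly 1.4x instead of 3x, which is the kind of saving mentioned earlier.

```bash
# Erasure-coded pool for the bulk of the data (8 data + 3 coding chunks)
ceph osd erasure-code-profile set ec-8-3 k=8 m=3
ceph osd pool create cold-data 256 256 erasure ec-8-3

# Small replicated pool on fast media, acting as a writeback cache in front
ceph osd pool create hot-cache 64 64 replicated
ceph osd tier add cold-data hot-cache
ceph osd tier cache-mode hot-cache writeback
ceph osd tier set-overlay cold-data hot-cache

# Bound the cache so it flushes and evicts to the EC pool (1 TiB here, illustrative)
ceph osd pool set hot-cache target_max_bytes 1099511627776
```

In the Jewel era, a replicated cache tier in front is also what lets RBD and CephFS clients sit on an erasure-coded pool at all, since EC pools did not yet support partial overwrites.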
Okay, Swift 2.7.0. First, people often come to me and ask, what is so great about Swift? The great thing about OpenStack Swift is that it is a very simple system: you can sit down in an afternoon and understand from top to bottom how it works. In many ways, simple is very beautiful; is there a more beautiful object store than Swift? I don't think there is. So, headline features for 2.7.0. Finally, erasure coding, which has been beta, or at least not fully production-supported, for a long time, is now fully supported. And when I say fully supported, it means you can buy Ubuntu Advantage from Canonical and we will support customers that are using it. There's another interesting change with Swift 2.7.0, which is that it now provides concurrent GETs across the cluster. You can turn this on, and the proxy will issue multiple GET requests to the backend object storage nodes and return the first response it gets. This lets you basically load-balance reads across the cluster, because you're no longer dependent on waiting for one specific object server to reply; the fastest object server that holds a copy of the data you're asking for will reply, and that's the response you get. Another interesting thing about Swift 2.7.0 is the improved Swift3 (S3 compatibility) middleware, which is often a question: does Swift work with, I don't know, Veeam, or Avere, or Veritas? Improved S3 coverage in general means there are boxes you need to tick for those vendors, and more of those boxes are ticked with Swift 2.7.0. And finally, there are some improvements in the API for manipulating SLOs, static large objects. These are really useful, at least from what I've seen, for our media customers who store lots and lots of blobs of 4K video in Swift; they come to us and say, hey, when is Swift going to get better at that? Well, 2.7.0 brings lots of improvements there too.

Okay, and finally, the last piece, and I want to do this as a lead-in to Deutsche Telekom, who have done a lot of work on OpenStack Manila. Manila is now finally getting to the point where customers are starting to use it in production, or are experimenting with it, setting it up to hook existing NAS or NFS deployments into an OpenStack. Manila basically gives you the same first-class experience for shared-file NAS that Cinder gave you for block. Effectively, if you've got a NAS out there and you'd like that NAS consumed inside your OpenStack, or made available inside your OpenStack, Manila lets you do things like create new shares, assign those shares to a guest, and allow a guest access to a share, which I think are the core things you need when you're setting up OpenStack and want to consume existing NAS storage. Now, we're actively working with people on lead Manila implementations, because as with any of these OpenStack integration points, Cinder, Neutron, Manila, it's one of those places where vendor interest and vendor involvement are important: you need to get the drivers right as well as the top-level API. So I'm going to step down now, and I'd like to have Deutsche Telekom come up here and tell us more about what they're doing with OpenStack Manila. Thanks.

So, my name is Marc Fiedler. I'm from Deutsche Telekom, from the German organization. Today we would like to give you an update about our activities to bring Manila into our environment and into production. For anyone who was also at the Vancouver Summit: we had a talk there which we called Phase 1. Since then we have spent a lot of time adopting our findings and new features to bring this to a new stage, and that's the reason why we call it Phase 2. Today I would like to talk a little about our collaboration model. Then we will look a little deeper into our shared file storage technologies, what we have done, and what our results were.
We would like to talk a little about our environment, which I think we changed massively compared to Phase 1, and later on we will give an overview of what is working, what is new, and what our future work is. That is our agenda for today.

First, I would like to give you a short overview of Deutsche Telekom's profile, because probably a lot of people don't really know much about Deutsche Telekom. We have around 150 million mobile customers worldwide, which I think is an impressive figure, around 30 million fixed-network customers, and around 80 million broadband customers. The reason I mention this is that in all three sectors, fixed, mobile, and online services, we want to push our NFV targets and NFV technologies massively, and that's why we developed in Germany the global architecture for our NFV purposes and looked at how we can integrate our shared file storage into that technology. Looking a little deeper at Germany and our purpose there, since we are from Germany, that's the reason I mention it here, to show where we are and how many million customers we have there: we are the biggest VDSL provider in Germany and also have an impressive 2.7 million IPTV customers. We recently launched a new IPTV platform in Germany, so there is a lot of stuff we have to manage.

Now we come to Manila, which is why we are here. Our requirements are nearly the same as in Phase 1: we have a strong demand for NFV and for technologies to optimize our processes. That is one of the main reasons why we push these activities in this group, why we have organized a big collaboration with different companies, and why we need a maximum degree of automation in our infrastructure. Optimizing our processes is the common need behind it. What is new, what we have changed in Phase 2, is mainly that we adopted our results from Phase 1. We took the lessons learned and looked at what we had to change in Phase 2 to get more results and more findings about what can be achieved when going to a multi-backend storage setup; that means joining additional vendors and putting everything into a physical environment. In the first phase we had only a virtual environment; that has changed, and Canonical and also HDS have now joined our group. Those are the new vendors in Phase 2 compared to Phase 1. We are also focusing strongly on high availability, which means we built up a physical OpenStack environment in HA and also deployed Manila in HA.

The partnering model, as you can see there, is really important for us as DT: collaborating with our partners to bring our needs to the community and spread them across the technical business. It's really nice that we have these five companies here, and I would like to briefly introduce the colleagues on the stage: to my right is Matthias Walcik from HDS, then we have Kapila Roha from NetApp, Christian V. from SVA, a German system integrator, and Thomas Neuburger, also from Deutsche Telekom. Now I would like to hand over to Christian.

Yeah, thank you, Marc. Let us quickly talk about the reason why we are following up with Manila; for that I would quickly like to recall the features and the key concepts.
Within the possible implementation flavors of cloud storage there is basically block storage, shared file services, and object storage. Cinder, for example, provides block storage that you can attach to one instance, which can then format a file system on it and do whatever it wants. You also have Swift, which is the scalable object storage. The problem is that not every application is capable of using that today, and that is where shared file services come in, and that is why we actually deal with Manila. Regarding the key concepts, we obviously have the shares, which are for example NFS or CIFS. Since we are talking about a shared environment, we have the access rules, which we can use to allow access for different IPs that in the end can be tenants. To tie the consumers and the actual share provider together, we have the so-called share network in Manila, which enables that. If you want to deploy more advanced security services, you can also do that, for example if you want to create a standardized infrastructure or use Active Directory services, and you can also use advanced backend functionality like snapshots or thin provisioning. If we look at how that is implemented, it is always through a so-called driver, and the driver then makes use of the features that the backend can provide. One key thing to know is that Manila is not in the actual data path; it acts just as a controller for the shared file service. With that I would like to hand over to Thomas, who will show the evaluation lab.

Hi. For the evaluation we set up a test lab. The goal for our test lab was to set up the whole OpenStack not only in virtual test environments, so we used a complete hardware setup with, I think, around 26 servers which were running OpenStack and were installed with Canonical's HA mode for the OpenStack control services. We also had two storage boxes, one from NetApp and one from HDS, so the Manila share service was running in multi-backend mode and we were able to create shares on the NetApp box and also on the HDS storage system. For that we used different Manila drivers which were capable of managing those storage systems. As you see on the upper left side, there were three LXD containers hosting the Manila API, the Manila scheduler, and the Manila share service, and the IP address for the Manila API is established in cluster mode by Pacemaker, so when one LXD container breaks down, the API switches over to another container and the API keeps running. So what was the evaluation, what were the next steps after setting up that environment? We did scenario tests, automated with Rally.
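A hedged reconstruction of the kind of Rally task described next follows; the scenario name is as in the Rally Manila plugins, and the share size is illustrative.

```bash
# Hypothetical reconstruction of the task file shown on the slide:
# 50 create-and-delete iterations of an NFS share, 5 at a time.
cat > manila-create-delete.json <<'EOF'
{
  "ManilaShares.create_and_delete_share": [
    {
      "args": { "share_proto": "nfs", "size": 1 },
      "runner": { "type": "constant", "times": 50, "concurrency": 5 }
    }
  ]
}
EOF

# Run it against the deployment registered in Rally and export the timing report
rally task start manila-create-delete.json
rally task report --out manila-report.html
```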
Rally not only tests functionality, it also works as a benchmark. If you look at that JSON file, it is an example of a "test Manila create and delete share" task: as the name says, it creates and deletes shares. The share protocol is NFS, the task runner from Rally starts five threads, and in total it creates 50 shares and deletes them again with a concurrency of those five threads. That enabled us not only to test functionality, but also to test how well the API responds, and to see, if we have a breakdown, whether the HA setup is working and whether we see any slower API calls while the cluster fails over. Then back to Christian.

So maybe we can just explain that with a short demo and some high-level graphs. On this slide you can see we have our Manila engine, and below that we have our different backends. What we do with Rally is measure the time it takes to create shares in parallel, so we are just hammering the Manila API, and therefore the backend storage, with creating shares, and the same goes for deleting. This is one simple example; let's have a look at the short demo video, hopefully you can see it. On the upper left we have the actual Rally test run, on the lower left we have a watch that shows the output of "manila list" every two seconds, and on the right we display the volumes that are available on the NetApp storage system. As you can see, when Rally starts, it actually creates Manila shares, and since the display is not fast enough it shows shares being deleted and created, and on the right side we can also see the volumes that are created on the NetApp box, for example. At the end Rally is able to tell us how long the actual tests took, for example how long the longest share create took or how long the fastest one needed. This is a very nice tool to just stress the whole environment and get a lot of information about how it would behave under an enterprise workload, where you have thousands of clients that request shares, delete them, and so on.

To continue with the analysis, we had this in Phase 1 as well: what is working and what can be improved. It is very nice that we now have multi-backend in place and have tested it on physical hardware. We have our scenario tests with Rally that can leverage different storage backends, and we could also do some performance testing with respect to the API; it is never about storage performance, it is always about Manila scheduler and API performance. Also very nice is that we were able to implement an HA version of Manila, or of our OpenStack cloud, and we were able to use Heat, which was one of the things we missed in Phase 1. Another thing that has improved over time is the vendor documentation: for example, if you look at NetApp or HDS, they have good information in place that documents the driver and its functions. From our point of view it would be nice if there were a single point of contact for documentation, so you do not have to do some research at HDS, some in the Manila wikis, and so on; it would be nice to cover that.

What we also discovered when we implemented this was that it was not very transparent to us how Manila placed shares. That was not related to Manila, it was related to us: we had set up both storage backends, and it only used the HDS storage for new shares when we did not specify a backend explicitly. Debugging, we found out that this is related to how the HDS system reports its capacities: everything was thin provisioned, and that was a little bit of a struggle, but again, that is not related to the vendor or to Manila, it was just not too easy to find out.
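As an aside, one way to see what each backend is actually reporting to the scheduler, and therefore why placement skews toward one box, is the admin pool listing. A hedged sketch, assuming the standard manila CLI of that era:

```bash
# Hosts/pools the scheduler knows about, in host@backend#pool form
manila pool-list

# Reported capabilities per pool, e.g. total/free capacity and whether the
# backend reports thin provisioning (which inflates the apparent free space)
manila pool-list --detail
```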
It would also be very nice to get share migration; we had that topic in Phase 1 as well. There is a lot going on there, but I think it will be very hard to migrate, for example, from HDS to NetApp; we will see what comes. Now I would like to hand over to the colleagues from the vendors to continue with the analysis.

I will briefly talk about the first three use cases and what we did. We started off with a use case which is basically an extension of the use cases done in the first phase: it was about share creation, but now a dedicated storage backend could be addressed during share creation. As Christian already said, we did both Manila-driven and user-driven placement; the user-driven placement was done by specifying a share type, of which we created one for Hitachi and one for NetApp, and it was possible to address the dedicated storage backend with the different share types. The second use case was about snapshot handling, something that was not done by DT in the first phase. The snapshot handling worked very well: on the Hitachi NAS, a snapshot that was created ended up as a tree clone on the HNAS, and making the snapshot available as well as deleting the snapshot all behaved as expected. The third use case was about integrating pre-existing shares into Manila with Manila Manage. The manage operation also worked as expected, but there we found different behavior between the storage backends: in the Hitachi case the export was available under the same export path after the manage command as before, which was different from what we saw with the other backend. But in general, all three use cases worked very well.

The fourth use case we tested was consuming these shares outside the scope of OpenStack. We have lots of bare-metal machines and other cloud environments, and the question was: can I use Manila, and actually use this platform or this service, to consume shares outside of OpenStack? We could do that easily after we set up the networking correctly; we did this within OpenStack and also manually. And the sixth use case you see here is about resizing shares: we could easily resize shares, extend them or reduce their size, and we could do this non-destructively, so if an application was consuming these shares while we resized them, it kept working properly and there was no disruption.

Yeah, and what we also tested was the HA mode we set up for the Manila API. For that we started a Rally test script which massively creates and deletes shares in parallel, and while that test run was going on, I was the bad admin guy who killed the LXC containers hosting the Manila API. The good thing was: it worked. The active IP address switched over to another LXC container, the API was available again after that switch, and all subsequent create and delete share operations were successful. During the failover we saw that a small number of create requests failed, simply because they were still in flight on the failed server. From an admin perspective that was okay, because we had a clear state of all shares: after the test we had the same state of shares in Manila and on the storage backends, so there was no difference and there were no interruptions. So that was good news; it works really well.
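To make those use cases concrete, here is a hedged sketch of the corresponding manila CLI calls; the share type, backend, and export names are purely illustrative, and the exact extra-specs depend on the drivers in use.

```bash
# User-driven placement: a share type per backend via the share_backend_name extra-spec
manila type-create hitachi-tier false
manila type-key hitachi-tier set share_backend_name=hnas-backend

# Create an NFS share on that backend, then snapshot it
manila create NFS 10 --name app-share --share-type hitachi-tier
manila snapshot-create app-share --name app-share-snap1

# Adopt a pre-existing export into Manila ("manila manage")
manila manage myhost@hnas-backend#pool1 NFS 10.0.0.5:/exports/legacy --name legacy-share

# Allow a client (inside or outside OpenStack) and resize non-destructively
manila access-allow app-share ip 10.20.0.0/24 --access-level rw
manila extend app-share 20
manila shrink app-share 10
```

From a consumer outside OpenStack, the export location reported by "manila show app-share" can then be mounted like any other NFS export.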
So, moving on: we're coming to the end, and it's time for the conclusion from our side. Our summary, compared to what we formulated in Phase 1, is that in this second phase it is becoming enterprise-mature. As you have seen, we fulfilled all our use cases and test cases, and Manila works quite well in an HA deployment, which I think is really important for our production environment: to see how the OpenStack side and the Manila side work together, with the storage boxes behind them. And the community and the vendors addressed our requirements, which I think is really important. Our further steps will be to run more extensive performance tests, and also to test how the environment behaves when hardware breaks, more on the OpenStack side; the behavior on the storage backend clusters is well known from the last 10 to 15 years. Also not yet covered was the Manila security perspective, integrating with LDAP and Kerberos systems; I think that's really interesting, but also a lot of work. So that means we still have some work to do, and those are the further steps we will take. So yeah, thanks for your time, and if you have questions we can talk about both sessions at the end.

Do we have some time for questions, some five minutes? Yes? Okay, I'll say yes. So, any questions for the Deutsche Telekom team or for myself? Please go ahead. You may make this very easy on us. Go ahead.

I heard the question, but it's actually a question for the team here. So the first question is whether there's any support for quotas and user-driven placement. Our idea behind this is that we have gold, silver, and bronze models on the storage backends: that means we have cheap storage and we have storage with a lot of services behind it, and that's a model where we can say, okay, from the perspective of the customer, they would like the silver model, and then we adapt the Manila-driven placement onto that storage. And quotas, are there any quotas? Actually, we didn't test that.

All right, there's a question in the back there. Before that, the microphone probably works, the one that you just walked by, I hope. Maybe Lou? All right. I have a question about Pacemaker and Corosync: are you using unicast or multicast for the heartbeat? I'm curious because I need to bring up a Corosync/Pacemaker setup in our OpenStack, multicast is an issue, and I'm just wondering how you solved that. Do you know the answer? Good question, but I don't really know the colleagues' setup, so let's take it offline and I'll come find you after the presentation, and we'll discuss it there.

Any other questions? Sure. The question is whether Ceph is being used on top of ZFS, is that it? The answer is no: by default, the Ceph that we support on Ubuntu runs on XFS underneath, and you can run it on ext4 as well; those are the two supported file systems we use underneath Ceph. There's a new storage backend being added to Ceph, but it's not used yet; it has only just come in with the newest version, Ceph Jewel. And the other question you asked was about de-duplication, is that what you said, or compression?
Ah yes. So if you do run Ceph on top of a compressible file system, Btrfs or ZFS at the bottom, then yes, you would be able to get the benefit of block-level compression before storing to disk. But today no one is really running that in any tested configuration, so we're not ready to support it yet; we're looking into what it means to change the file system underneath Ceph and still maintain the SLA guarantees on top.

Any other question? Final one? Yeah, go ahead. The question is about sequential workloads on ZFS, is that what you said? Sequential workloads on ZFS, whether we've run or tested them. Largely, ZFS performance for sequential workloads depends on the configuration of the storage underneath the pool; you're talking about the pathological cases where you have RAID-Z and you get very bad sequential performance, right? No? So, just keep in mind that ZFS will be very new. For instance, Swift and Ceph as we're delivering them today are not delivered on top of ZFS; in our reference configurations they run on ext4 or XFS, which have seen much, much wider testing within Linux. We're bringing in ZFS, and if you think about it, any technology, as production-grade as it may be outside of Linux, will go through its own maturity curve when you bring it into Linux. So realistically I expect that within the 16.04 life cycle you'll see ZFS become more and more stable and mature, as the vast majority of users encounter it for the first time. The first time most Linux users will see ZFS at all will be now, so they'll start building their own zpools and figuring out what the performance is like, and we'll take that into account. For people who are already experienced with ZFS on Linux and are looking for a partner, Canonical is there for them, and for people who are experimenting, we really want to encourage that, because that's the only way ZFS will get better.

Go ahead. Do we recommend ZFS for Cassandra or MongoDB? In truth, we don't have enough experience running either of them on it in production. As I said, we're providing ZFS, and this is the first snapshot people have ever gotten of ZFS on Linux in a supported state, so we're working with lead customers on performance characteristics. If you have an issue, we will definitely address it; performance, though, is the sort of thing that takes time for us to really get the experience and bake it in. That's the honest, but not so great, answer, I guess.

All right, I think we're out of time, so thanks very much, guys. Thanks.