Can everyone hear me? Is it on? OK, my name is Sage Weil. I'm the Ceph principal architect and lead developer. And today, I'm going to talk about how to do containers on top of Ceph and a whole bunch of other good stuff. I'd like to thank Haomai Wang, formerly of UnitedStack, who helped me put together a lot of the material for this presentation. So just a brief agenda. I'm going to do a little bit of motivation about why we should care about this and why it's interesting. I'm going to talk specifically about how to do block and file, and how to plumb Ceph storage to VMs, containers, and bare metal. I'll finish up by talking a little bit about container orchestration and why this is important, and then summarize. So first, motivation. In order to have a compelling cloud offering, we need to offer lots of different options. On the compute side, some people are going to want virtual machines with various hypervisors. Some people are going to want containers. Some people are going to want bare metal. This is even more true on the storage side. Depending on what applications you're deploying in the cloud, there are lots of different interfaces and types of storage that you may need. Block, obviously, is very important just for running the virtual machine infrastructure itself. But shared file is also very important for lots of different workloads, particularly legacy workloads. Object storage is important for scale-out applications and lots of newer applications. But that's not it. There's also key-value storage that's needed for lots of different types of deployments, NoSQL, SQL, and so forth. And we need to look at all these different options when we're building our cloud infrastructure to figure out how we can make all of them available to our tenants. Why are we interested in containers? First, on the technology side, containers are compelling because they offer a lot of value in terms of performance. There's a shared kernel, so you get much faster boot. You have a lower baseline overhead when running lots of containers on a single machine. And in general, you get better resource sharing between containers, so overall you have a more efficient computing infrastructure. Specifically on the storage side, containers are also compelling. The shared kernel means that you have essentially no IO overhead when passing from the container to the host that's actually doing the real IO for you. And also, in most architectures, you have much smaller container images that you're using to spin up these containers, so you can start them up very quickly and efficiently. There's also an emerging container ecosystem that's very compelling. There are lots of container host OSes that have landed in the last couple of years. There's the Atomic project that gives you new-style package management that's much nicer, sort of bringing us into the new century. CoreOS and Snappy Ubuntu are also in the same category. We also see a lot of new app provisioning models where you have lots of different containers running individual services, each with its own standalone execution environment, which is pretty valuable. And there's even a new open container specification that was announced last week, which is pretty exciting; it tries to bridge the gap between all these different container runtimes. Why wouldn't you run a container? Well, there are a few reasons.
On the technology side, the security in containers is much weaker because, again, they have a shared kernel. So if you compromise anything, you've essentially rooted the host, and you have much more limited isolation than you can provide by isolating a piece of hardware. It also removes some of your OS flexibility because, again, you have a shared kernel, so there are some OSes that you obviously can't run in containers. There's also just a lot of social inertia. People are used to deploying on VMs, and they'll continue to do so even after there are better solutions. On the ecosystem side, containers are also a bit of a challenge because a lot of the new models for deploying applications in containers don't really capture the legacy infrastructure that we still have to run in a lot of environments. And so you end up with a sort of split-brain IT infrastructure where you have the old-style stuff, the new-style stuff, and so forth. Why are we interested in Ceph? Well, Ceph is awesome. It scales horizontally. There are no single points of failure. You've heard all this before. It's hardware agnostic and runs on commodity hardware. We try to self-manage whenever possible so that the administrator isn't up all night. It's open source, LGPL. But at a higher level, really the design goal with Ceph was to move beyond the designs and approaches that you see in legacy systems, toward a model where we're talking about client and cluster instead of client and server. So clients understand they're talking to a scale-out service that's dynamic, and we avoid the old ad hoc models of high availability. And as we continue into this new century with containers and so forth, we want to keep thinking outside the box and figure out what's the best way to plumb storage to our applications. This is a picture everyone's probably seen before. Ceph currently provides three different storage services. The RADOS Gateway gives you S3 and Swift, RBD gives you a virtualized block device, and CephFS is a distributed shared POSIX file system. I'm going to talk mostly about block and file today. So, starting with block storage. The existing block storage model is pervasive in all of our current clouds. VMs are the unit of cloud compute, and block devices plumbed to those VMs are the unit of block storage. They come in two varieties, ephemeral and persistent; we'll mostly ignore that distinction for today. The key thing is that the block devices are single user: for a given block device, only a single VM is using it at one time. And if you need shared storage for some application, you're just sent elsewhere. So you might use object storage, S3, Swift, something like that. Maybe you need a database for your application, and so forth. But it's really not the whole solution. What that actually looks like in today's deployments: people are using KVM, and they're using the RBD driver in KVM that links directly back to the Ceph cluster. So you have your RADOS cluster here on the bottom, the hypervisor talking to it, Nova managing the connection and starting KVM so it knows to talk to the storage cluster, and Cinder managing the actual volumes. So this is a proven model. It gets good performance and good security. Performance could be better, so we'd like to look at some other options, but it's the thing that everybody does today. The latest survey showed that 44% of clouds are using RBD, presumably in this fashion.
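To make that standard model concrete, here's a minimal sketch of the usual Cinder plus Nova workflow using the command-line clients; the volume name, size, server name, and device path are illustrative assumptions, not from the talk.

```python
# Minimal sketch of the standard Cinder + Nova block workflow via the CLI clients.
# Volume name, size, server name, and device path are illustrative assumptions.
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

# 1. Ask Cinder for a 10 GB volume (backed by RBD when the rbd driver is configured).
run("cinder", "create", "--display-name", "demo-vol", "10")

# 2. Ask Nova to attach it to an existing instance. QEMU/KVM then opens the
#    image through librbd and presents it to the guest, e.g. as /dev/vdb.
volume_id = "0a1b2c3d-..."   # UUID reported by the cinder create call above
run("nova", "volume-attach", "demo-instance", volume_id, "/dev/vdb")
```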
But that's not the only way to talk to Ceph block storage. There's the librbd driver that's linked in by KVM; that's what most people are using. There's also a FUSE driver, but it's experimental, so nobody really uses that. The main thing is that there's also a native kernel driver in the Linux kernel that will map RBD devices directly. It's stable and well supported on modern distros and kernels. There's some feature gap relative to the librbd driver, the two key things being client-side caching and fancy striping. Fancy striping almost nobody uses, though it might be important for databases, so most of the time you don't really need to worry about it. The main thing is that there's some performance delta, although exactly what that delta is is unclear. The kernel implementation is more efficient: it's clean C code, it's much lighter weight, and it's all in the kernel, so it's very fast. And when we do performance tests that are backed by flash on the server side and we're just trying to push as many IOPS through as we can, we get really good numbers, better than we get with librbd. On the other hand, because we don't have that caching layer, we see higher latencies with some workloads, at least experimentally, and most people run with the librbd cache enabled. So how exactly that pans out in a particular environment is TBD. But if we start looking at that driver, then we open up a number of different options. So, starting with the Nova container driver, LXC, what can we do? The main idea here is that you have your Nova host, and it's going to map the volume from the Ceph cluster using the Ceph kernel driver. Not librbd; that should say rbd.ko, actually. So the host has the kernel block device, and then it just maps that into the container, and the container can use it. On the pro side, this is fast and efficient. We just have to implement the existing Nova API calls, attach and detach, to make that work. On the con side, the security is a bit weaker because container security is just weaker at baseline, so if you've subverted the container, then, well, whatever. But that's not really a storage issue, I guess. As for current status, the LXC driver is maintained and deployed by some people, but it's obviously not as widely used as the other drivers. And unfortunately there's no prototype of this particular scenario, but there's nothing preventing it from working. Once we enter the world of buzzword compliance, we'll probably want to look at Nova Docker. It's a similar situation, almost identical to what we're doing with LXC: you can again map the block device on the host with the kernel driver and then just bind it into the guest. So it's also fast and efficient. The main issue with using the Nova Docker driver is that the images are different, right? Nova Docker uses a different image model: they're Docker images and not block device images. And so to actually use this in practice and deploy applications, you'd have to have a whole different set of image libraries and so forth. So it's a little bit different. There's also no prototype, and Nova Docker is out of tree, so whether people would actually do this in practice or want to go down this road is also unclear. But since we're going from virtual machines to containers, why don't we just continue down this path and see what happens when we start looking at Ironic, which is the bare metal driver for Nova.
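Before getting to Ironic, here's roughly what that kernel-mapped approach looks like on a host; a minimal sketch with made-up pool, image, and mount names, assuming the rbd kernel module and Docker are already installed.

```python
# Sketch: map an RBD image with the kernel driver (rbd.ko) on the host,
# put a filesystem on it, and hand it to a container as a bind mount.
# Pool, image, and mount names are illustrative assumptions.
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

# Map the image; the kernel exposes it as a block device (/dev/rbdN plus a
# /dev/rbd/<pool>/<image> udev symlink).
run("rbd", "map", "volumes/demo-image", "--id", "admin")

# Format and mount it on the host (first use only).
run("mkfs.xfs", "/dev/rbd/volumes/demo-image")
run("mount", "/dev/rbd/volumes/demo-image", "/mnt/demo-vol")

# The "attach" step for a container is then just a bind mount into its
# namespace, e.g. via Docker's -v flag.
run("docker", "run", "-d", "-v", "/mnt/demo-vol:/data", "busybox", "sleep", "3600")
```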
It's a similar model here: you have the bare metal host, and you're using the Ceph kernel driver to map the RBD device directly on the host. On the pro side, this gives you great performance and efficiency, and there's really no layer in between that can slow you down. So that's great. The other nice thing is that it's again the traditional app deployment model that people are already using with Glance and Cinder and Nova, with machine images; they just happen to be real machines instead of virtual machines. So that's great. On the con side, the guest OS suddenly has to support this kernel driver, so you have to use tenant images that actually have modern kernels on them. They're a little bit more constrained as far as what you can deploy in this situation. The other trick is that you need an agent running on that host that actually does the configuration of the block device driver, so that you can implement the attach command in the Nova API. And doing something like boot from volume is a little bit tricky, because you have to do all that trickiness when you're actually starting up the bare metal machine. The good news is that this is a hot topic at the conference. There's a session this afternoon where they're talking about Cinder and Ironic integration. So if you're interested in this, you should definitely attend, and raise your hand when they ask whether we should have an agent that does all this and makes it work, because I think it'd be great. So that's exciting. So, in summary, if we look at block, we have a couple of different options: using librbd, with decent performance, or the native block device driver, rbd.ko, which is better. The image format changes a little bit, but it's essentially there. If we look at the spectrum, we have virtual machines, we have containers, and we have bare metal, and we can plumb block storage via Cinder to all of them if we just implement the APIs for these different Nova drivers. But block storage is boring, right? The volumes are semi-elastic: they're awkward to resize up, and once they're resized up, you can't resize them down. And they're not shared, which limits their utility for a general purpose cloud deployment. So that brings us to file storage and all the good things you can do with that. Obviously, Manila is the file storage service project for OpenStack, and it manages file shares. You can do things like create shares and share and unshare them. Manila also manages the tenant network connectivity, so there's a call that will attach the share to the tenant network so you can go mount it, an NFS share in this case, and it does things like snapshot management. We're interested in file storage because it gives you the familiar POSIX semantics that all our legacy apps require, and you can do things like home directories, which are important for lots of people. It also gives you a shared volume, so you can have lots of VMs talking to the same storage and interacting through the file system, which can be very important. And it's also elastic storage: you can add more files to a directory and then delete them, so the file share can grow and shrink in a much more elastic way, without having to explicitly resize a volume up and not down and so forth. The quotas are there only to make sure you have some fences, but they're policy and not technology.
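For reference, the basic Manila flow as it stands today looks something like the following sketch; the share name, size, CIDR, and export path are illustrative assumptions, and the last step is the manual mount inside the guest that comes up next.

```python
# Sketch of the current Manila workflow with an NFS back end; share name, size,
# CIDR, and export path are illustrative assumptions.
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

# Tenant asks Manila for a 1 GB NFS share and grants access to their subnet.
run("manila", "create", "NFS", "1", "--name", "demo-share")
run("manila", "access-allow", "demo-share", "ip", "10.0.0.0/24")

# The final step is still manual today: inside the tenant VM, the user runs the
# mount against the export location that `manila show demo-share` reports, e.g.
#   mount -t nfs 198.51.100.10:/shares/demo-share /mnt/demo-share
```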
But there are some caveats; there are some issues with making Manila work, because file is just a more complex protocol, and it's not something you abstract at the virtual machine level. So there's the last mile problem. Currently that means we have to connect the storage to the guest network. There are a variety of network drivers that Manila is targeting; the main one is Neutron. Luckily, that captures, I think, most people deploying real clouds, but if you're using more exotic network architectures in OpenStack, you might have issues. The other issue is the mount problem. Essentially, as currently implemented, Manila makes it possible for a guest to mount a share: it makes the share available, and then the guest has to go onto the tenant VM and actually run "mount -t nfs", blah blah blah. There's ongoing discussion about possibly having an agent or automating that actual mount process. There was a discussion this morning, in fact, which I unfortunately missed but heard about afterwards, so that's encouraging. People are definitely interested in this, but there's work to do there. And there are also some baked-in assumptions about what both of these mean and what the implications are, which I'll get to in a moment. So, if you look at what you can do with Manila and the ways you can plumb it to your actual compute, there's one category of drivers that drive legacy appliances; these are the NFS filers of the world. Essentially, those drivers tell the appliance to export an NFS share, and they map an IP for that appliance into the tenant network. But, putting my open source hat on, they're closed, proprietary, expensive, and not super interesting for an open source project like OpenStack, at least in my mind. There are lots of these drivers, though, and the main thing is that they essentially punt security to the vendor. They expose the storage IP to the tenant, so you can talk directly to the filer or whatever, and it's up to the filer to make sure you can't subvert the security there. Which is fine; that makes sense. There's also a Ganesha driver, which is what people are coalescing around; I believe it's sort of the default. The model here is that you have a service VM, spun up somewhere in your cloud, that runs the NFS-Ganesha server. It mounts your storage system by talking to the storage network on one side, and then it re-exports that same storage via NFS to the tenant, so it's also attached to the tenant network on the other side. So this exists, it's well supported, it's the model that everyone's doing. The challenge is just that the performance isn't quite as good. If we look at what this means for Ceph: you would have a KVM virtual machine, you would mount, via NFS over the tenant network, the service virtual machine that's running Ganesha, and then Ganesha has a number of storage drivers built in; there's a Gluster one, there's a CephFS one, you can actually mount it on the host, lots of different options there. And then that talks to the Ceph cluster on the back end. So it's a simple existing model, and it gives good security, because the tenant network doesn't have access to the storage network, which is good for a public cloud especially. But it has drawbacks too: there's an extra hop, you have to go through this service VM, which means you have higher latency.
The service VM also becomes a single point of failure, so you have to have all this other machinery where, if it fails, you spin another one up, and you have a gap in service and so forth. And the service VMs consume resources: you have all your traffic passing through your cloud twice, and it consumes compute to actually do that re-export. But the Ganesha driver exists. It's untested with CephFS unfortunately, so working to fix that is our next step, but that's where that's at. The next category of drivers is the native drivers. There's already a Gluster native driver; the next step here would be to have a Manila Ceph native driver. The idea is that again you're spinning up KVM virtual machines, but you allow the virtual machine to mount CephFS directly by letting it talk directly to the storage network. On the plus side, this gives you the best performance: the tenant VM can talk directly to storage, there are no extra hops, it's using the native Ceph protocol, which is going to perform much better than NFS will, and you have access to the full CephFS feature set, snapshots and recursive accounting and quotas and all that good stuff. And it's simple, simple to understand. On the other hand, the guest again has some additional requirements: it has to have a modern kernel with a supported, or supportable, CephFS client, which people should probably be running anyway, so that's not such a big deal. But the main thing is that it exposes the tenant to the Ceph cluster, so you're suddenly relying on the Ceph cluster to be secure and relying on all the security features there. And there's another technical challenge: when you mount a CephFS volume, you have to provide a secret that authorizes you; it's not just IP based. So you have to somehow deliver that secret to the tenant so they can run the right mount command and get access to the cluster. There's no prototype for this one just yet, and the other status item is that CephFS isolation and security is a work in progress; we have a bunch of interns this summer who are knocking off the relevant issues there. But for a trusted environment, this might make the most sense. The key challenge, though, is that the network-only model, of envisioning Manila as providing storage connected to tenants via a network, is limiting. The current assumption of it only being NFS and CIFS just sucks, because there are other file systems and protocols out there that are better. Always relying on the guest to actually do the mount kind of sucks too, because the mount command differs depending on what file system type you're using; in the CephFS case you have to pass a secret, and there are other random options you might want. And even just assuming that the storage is connected via the network sucks, because maybe it's not a network at all; maybe it's something that's just passed through the hypervisor, because there are other options. The first one is KVM VirtFS, 9p. KVM has this feature where you can pass through a file system from the host to the guest. The guest kernel speaks the 9p protocol, and there's an embedded 9p server in QEMU/KVM that it talks to. It uses virtio for fast data transfer, and it performs pretty well. It's upstream; it's not super widely used, but it exists and has been around for several years now.
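To illustrate what that host-side passthrough looks like, here's a minimal sketch of wiring a host directory into a guest with QEMU's VirtFS support and mounting it over 9p inside the guest; the host path, mount tag, and guest image are made-up examples.

```python
# Sketch: expose a host directory to a KVM guest via VirtFS (virtio-9p).
# Host path, mount tag, and guest image are illustrative assumptions.
import subprocess

# Host side: start QEMU with a -virtfs export. In the CephFS case, the host
# would already have CephFS mounted at /mnt/cephfs/tenant1 via the kernel client.
subprocess.run([
    "qemu-system-x86_64",
    "-enable-kvm", "-m", "2048",
    "-drive", "file=guest.img,format=qcow2",
    "-virtfs", "local,path=/mnt/cephfs/tenant1,mount_tag=tenantshare,"
               "security_model=passthrough,id=tenantshare",
], check=True)

# Guest side: the tenant (or an agent) mounts the tag over 9p/virtio:
#   mount -t 9p -o trans=virtio,version=9p2000.L tenantshare /mnt/share
```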
If you don't want to use that, there's also just plain old NFS: you can mount the file system on the host and then export it over NFS to the guest, with a small private network plumbed between the two, which avoids the extra network hop you have with the service VM. And in the container case, there's nothing really to do at all: you just mount it on the host and do a bind mount into the guest namespace. So the actual last mile is not a network hop but a namespacing issue in the tenant or the guest kernel. So I think, in order to make all of these options make sense in a general way, we need a new model and a new way to talk about this. The mount problem is being discussed by the Manila team; it's an ongoing issue. In fact, it came up this morning. They have a simple prototype using cloud-init that works, at least when you first create the VM, though not after that, unfortunately, but it's a start. And they're talking about the possibility of a Manila agent that can actually mount these things on the fly; maybe the Zaqar messaging service would be a good fit. That all sounds reasonable, but I have a slightly different proposal. I think what we really need is a new API command that will attach a file system or detach a file system, in the same way we have the Cinder mapping with attach volume and detach volume. Because the way you actually attach file storage to the tenant VM or container or bare metal machine is totally dependent on what the Nova driver is; it's not something that Manila can necessarily understand. So that API's responsibility would be to plumb the Manila share to the tenant in whatever way is appropriate for that driver. The open question then is: does that also include calling that final mount command, or is there still a final step that the tenant needs to do? I think yes, but it can really go either way; I don't have a strong opinion there. So what would that look like? We can imagine having a KVM virtual machine. We mount CephFS using the kernel client on the host, and then that's passed into the virtual machine via VirtFS using the 9p protocol. And so you have that extra hop through the hypervisor back to Ceph. The nice part of this is that it gives you great security: the tenant remains isolated from the storage network, and you're boxed into a particular directory. Sorry, maybe the mouse just needs to be moved... I need my password... the battery... oh man. All right, let's see how quickly we can do this. Okay, that was exciting. So, if you don't want to use VirtFS 9p, maybe because you're not using a modern Linux kernel, you can also do the same thing with NFS, where you mount the file system on the host and then NFS-export it to the guest. The downside there is that NFS has weaker consistency, and it also tends to be slower; the 9p approach uses virtio and NFS doesn't. So, moving into the container world, sorry, go back one. You can do the same thing with the LXC and Nova Docker drivers and the CephFS kernel client. Again, you mount it on the host and just bind that directory into the guest namespace. You get the best performance, you get the full CephFS semantics on the file system, but again, you're just relying on the container for that security. Moving on again to the bare metal case, Ironic is a similar situation: you're mounting CephFS using the kernel driver on the tenant bare metal machine.
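For reference, the native mount that the tenant (or a future agent) would end up running looks roughly like this; the monitor address, user name, key file, and paths are illustrative assumptions, and the secret is the CephX key mentioned earlier.

```python
# Sketch: mount CephFS with the kernel client, authenticating with a CephX key.
# Monitor address, user, key file, and mount points are illustrative assumptions.
import subprocess

subprocess.run([
    "mount", "-t", "ceph",
    "198.51.100.1:6789:/volumes/tenant1",   # monitor address and path within CephFS
    "/mnt/tenant1",
    "-o", "name=tenant1,secretfile=/etc/ceph/tenant1.secret",
], check=True)

# In the LXC / Nova Docker case, the host does this mount and the "attach" is
# then just a bind mount into the container's namespace, e.g.:
#   mount --bind /mnt/tenant1 /var/lib/lxc/guest/rootfs/mnt/share
```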
You have the requirement that the OS has to support that, but again, you get the best performance and the full feature set. The challenge is really around security: if you're still relying on CephFS to provide the security, what that means for networking with Ironic is, honestly, something I don't fully understand yet, so that's an issue, or maybe not. But the real challenge is that there needs to be an agent that actually does that mount for you, so that when you call the attach API, something on that tenant bare metal machine actually does the mount. The good news is that there are discussions between Cinder and Ironic about creating a Cinder driver, and hopefully that will involve an agent, and we can reuse it to do the file mount too. Next slide. So again, this mount problem is really an issue; it keeps coming up. Essentially, containers break the assumption that it's a network that connects your tenant VM or container to your storage. Mounting becomes driver dependent, and it's harder for the tenant to know what the right thing to do is to actually make it work. So this Nova attach API could provide the needed entry point. With KVM, we already have a QEMU guest agent. Ironic doesn't have an agent for this yet, but hopefully it will soon. In the container case, it's trivial; we're just doing a bind mount. On the other hand, you could also just make the tenant do the final mount. But for that to make sense, you need some API the tenant can query to learn what the magic mount command is that it needs to run, maybe "mount -t ceph" something or other. In the container case, we could bind to a dummy location inside the container namespace, and then the final mount command the user has to run is a bind mount from the dummy location to the final location. Security is obviously an issue; there's no free lunch there. In the KVM and Ironic cases, when you're using the kernel Ceph driver, the tenant has access to the Ceph network, so you're relying on Ceph security. There's ongoing work to improve that, but in public cloud situations, obviously, that might not be what you actually want. If you're using KVM and either the VirtFS 9p pass-through or the NFS pass-through, you get much better security, but the pass-through limits your performance, so it's a trade-off. In the container case, security for the container is weak at baseline, but assuming that's good enough for you and you trust it, then the container is actually locking you into a nice directory, and that's what you get. So maybe that's just fine. So we took a look at performance. These are numbers that Haomai put together; thanks to him again. Basically, we have two fast nodes, one acting as a Ceph server with some OSDs, and another acting as the client, and we looked at three things. We looked at the VM natively mounting CephFS from the server; we looked at the VM using VirtFS 9p to the host, with the host mounting CephFS from the server; and then the same thing with NFS, passed through to the host, which mounts CephFS from the server. So first we looked at sequential performance, on the next slide. CephFS native has the best performance. VirtFS 9p does pretty well for large IO sizes, which was a surprise given what I'd read on the net. On small IO sizes, it's slower just because there's some call overhead.
Each IO is effectively a call through the virtio layer, so that's a little slower, but maybe that's not really a big issue. NFS was slower by about 30%. Maybe that's good enough and you'd rather use NFS than 9p, which is less well supported; maybe not. Sequential read is a similar picture; everybody does pretty much the same. Interestingly, VirtFS actually does better, I think because it's doing more aggressive prefetching than CephFS does by default, but the read-side situation looks pretty good. On random IO it's a similar picture, a little bit different. You can ignore the high spike on the left; that was a miscalculation. These are pretty raw numbers. But again, for small IO sizes things look pretty good; they all perform pretty similarly. Although when the random writes get big, they look sequential, and you see that stratification again where things aren't quite as fast as CephFS native, but they're not so bad. On the random reads, 9p and NFS do similarly. Things get kind of spiky; I think that's confusing the readahead code, but the situation isn't so bad using those pass-throughs, so that's encouraging. So, next slide. When you try to put all this together, we have a whole laundry list of different options. The important stratifications are: some of them are using virtual machines, some of them are using gateways, and some of them are natively mounting using the Ceph driver for the tenant, either on the host or inside the VM. The main one we unfortunately didn't test, which would have been interesting, was to compare this to the traditional Manila model with the service VM. The assumption is that it's going to be slower than doing the NFS pass-through on the host, but unfortunately I don't have a number to show exactly how much slower; that would be very interesting to look at. We have a prototype that Haomai put together for that VirtFS pass-through with CephFS. It hasn't been proposed to the Manila team or anything, but it exists and it works, and I believe it's actually deployed in production at his former employer. On the Nova Docker case for CephFS, there's also an IBM talk on Thursday that is probably going to be interesting; I believe they're going to talk about exactly this, too. So, that brings us to the last section: what does this all mean for container orchestration? First, containers are different. For example, Nova Docker implements a Nova view of a Docker container, where they're essentially using the container as a mini machine, and not really as a Docker container, because this is just an artifact of Nova being an infrastructure-as-a-service API designed for virtual machines, not something designed for application containers. Which is fine; that's what it's for, and that's what we should use it for. Kubernetes is a new project; it's all the new hotness. It's a higher-level abstraction layer for coordination and orchestration of containers. It draws on years of experience at Google, what they've learned running containers to make our searches go super fast, and it has a vibrant open source community. That's mostly what I'm going to talk about here, and it's quite exciting. So, in the Kubernetes world, there's an effort to make Ceph storage be surfaced to containers.
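To give a feel for what surfacing Ceph storage to a pod looks like, here's a rough sketch of a pod definition with an RBD volume, written out as a Python dict; the field names reflect the pending volume plugin at the time and may differ, and the monitor, pool, image, and secret names are made-up examples. The driver landscape itself is described next.

```python
# Rough sketch of a Kubernetes pod manifest that mounts a pre-existing RBD image.
# Field names come from the then-pending RBD volume plugin and may differ;
# monitor, pool, image, and secret names are illustrative assumptions.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "rbd-demo"},
    "spec": {
        "containers": [{
            "name": "app",
            "image": "busybox",
            "command": ["sleep", "3600"],
            "volumeMounts": [{"name": "data", "mountPath": "/data"}],
        }],
        "volumes": [{
            "name": "data",
            "rbd": {
                "monitors": ["198.51.100.1:6789"],
                "pool": "kube",
                "image": "demo-image",   # must already exist; no create-on-demand yet
                "user": "admin",
                "secretRef": {"name": "ceph-secret"},
                "fsType": "ext4",
            },
        }],
    },
}
```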
Kubernetes has several different types of volume drivers. You can just pass through directories from the host to the pods, which are essentially the guest containers, and then there are two categories of drivers: those that take a block device, put a file system on it, and bind that into the container pod; and those that take a shared file system and pass it into the container. There are drivers for both Ceph RBD and CephFS that are pending or in review; maybe they've even been merged by now, and I think they've mostly sorted out the issues. So, that's all exciting. The only real challenge here is that these drivers expect pre-existing volumes. You create all the volumes, then you tell Kubernetes about them, and it attaches them to the pods. So we really need a REST API that Kubernetes can call to create them as needed; it's just an annoying gap, but that's where we're at so far. The goal with all of this is to take Kubernetes and run it on top of OpenStack. The idea would be that you provision Nova virtual machines, or Nova somethings; it could be KVM, it could be Ironic on bare metal. You would probably install a guest that's a container OS, something like Atomic or CoreOS or Ubuntu Snappy, since we know we're going to be running containers there. Kubernetes would then be layered on by the tenant across all the VMs they created, and then you would need to provision the storage devices to attach to those containers using the existing OpenStack services. So, not necessarily the drivers I just mentioned that will be natively supported in Kubernetes, but a Kubernetes driver that talks to Manila and to Cinder to get the block storage and file storage it wants. As for status so far, there's a Cinder prototype, and a Manila one is coming soon. So, what comes next? The first thing is that we need to figure out an Ironic agent so that when we're doing these bare metal machines, we can actually attach the storage in the way the APIs say we need to. This will enable both Cinder and Manila on bare metal. There's a discussion about this very topic at 5:20 in the Cinder track that you should all attend if you're interested. And hopefully, if we have such an agent, we can also use it for Manila, so we can do that final attaching of the file system to a bare metal machine as well. We also, I believe, need to expand the breadth of the Manila drivers. The current scope of Manila, as creating an NFS share that's available for the tenant to mount, is limiting. It's not sufficient; it doesn't allow all these other options that are quite compelling. That means using things like VirtFS 9p, an NFS proxy on the host, and native drivers, CephFS native or GlusterFS native. And the other thing is that we need to get past this idea that the last mile is always the tenant network. It's not just a matter of plumbing the storage network to the tenant; it might be that Nova actually needs to do something more intelligent, like doing this proxying through VirtFS or something, in order to do that attachment. So we need this Nova attach-filesystem API or equivalent. It'll simplify the tenant experience so that all the orchestration that happens on top of Nova works the same regardless of whether you're using KVM or LXC or Nova Docker or Ironic.
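Purely as an illustration of that proposal, and not any existing Nova interface, the driver-dependent dispatch might look something like this sketch; every name in it is hypothetical.

```python
# Hypothetical sketch of the proposed "attach filesystem" call, dispatching to
# driver-specific plumbing. Nothing here is an existing Nova interface.

def attach_kvm(instance, share):
    # Mount the share on the hypervisor host, then expose it to the guest
    # over VirtFS/9p (or an NFS pass-through for older guest kernels).
    print(f"host-mount {share} for {instance}; add a -virtfs export")

def attach_container(instance, share):
    # LXC / Nova Docker: mount on the host, bind into the guest namespace.
    print(f"host-mount {share}; bind mount into {instance}'s namespace")

def attach_ironic(instance, share):
    # Bare metal: an in-guest agent has to run the final mount itself.
    print(f"ask the agent on {instance} to mount {share}")

DRIVERS = {"kvm": attach_kvm, "lxc": attach_container,
           "docker": attach_container, "ironic": attach_ironic}

def attach_filesystem(instance, share, driver="kvm"):
    """One tenant-facing call; Nova picks the plumbing appropriate to its driver."""
    DRIVERS[driver](instance, share)

attach_filesystem("demo-instance", "manila-share-1234", driver="lxc")
```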
And it'll allow us to paper over all these technical differences between virtual machines and containers and bare metal, because ultimately that's what Nova is supposed to do. Next slide. So, that's it. I'd like to thank Haomai Wang again. He put together the performance plots for me and did a lot of the legwork in putting this all together. So, sorry for the technical difficulties earlier. Happy to take any questions if we have time. Sorry, could you say that again? Yes, so the question is why I didn't cover object. I focused just on block and file in this case because they vary when you're actually talking about containers and bare metal. In the object case, it doesn't really make any difference: you're talking over the network to a storage service, and so there's no real difference. And I had enough material that I couldn't fit it into the talk. Other questions? So, a quick question: do you have some idea, in terms of cross mounts, of say mounting a file system from one zone to a VM in a different zone? My understanding is that Manila is managing the shares and that it's going to plumb the connectivity to whatever zone the VM happens to be in. Oh, you're talking about Nova zones? Yeah, honestly, I don't know. Talk to the Manila folks. Cool, thanks. Yeah, other questions? Yes, use the mic, it'll be easier, yeah. So, for Ironic, bare metal, and containers, do you think the Ceph file system is a better fit for the use case than a block device? I think it depends on what the use case is. So the question is whether the Ceph file system or block makes more sense. It really depends on whether you need shared storage or not, and whether you expect that storage to expand or not. Block devices you create at a given size; you can make them bigger, but it's hard and tedious to make them smaller. So if you're not sharing, you can use either one. But if you're sharing, you definitely need a file system. But currently for Ceph, the block device is more mature than the file system? Yes. Yes. So, yeah, currently the block device is more mature, but that's changing quickly, and we expect to have a production-ready file system very soon. Okay, thank you. Thanks. Anything else? All right, thank you very much.