Well, I'm the last one, and thank you for attending. We're all tired, I'm tired, and hello again to anyone who was here in the morning. So now I'm going to talk about Ceph and what is new in the last few releases. We're going to talk about Jewel, which is the latest long-term version, about what's new in Kraken, which is where we are now, and about Luminous.

Ceph alternates long-term and short-term releases, one after the other. Hammer came out in March 2015, and the latest Hammer point release is out now; I think it will be the last one. Infernalis came out in November 2015; it's a short-term release, so it's not supported anymore. Jewel came out in April 2016 and has been the stable release for a while now, and Kraken is due in January. We're a bit sad that it's a short-term release, but that's life. And Luminous is targeted for April 2017.

So let's start with Jewel. I'm not going to go into Ceph in detail, it's very complex, so I'll go really quickly over what Ceph is. Ceph provides three services: CephFS for a POSIX file system, RBD for block storage, and the RADOS Gateway for object storage. I assume everybody here knows Ceph and doesn't need more details; if not, you can watch the video from this morning, because we don't have time in this short presentation. They all use a library for the object storage API, librados, and underneath we have the Ceph cluster, RADOS, which provides the distributed object store.

First, CephFS, because everyone always asks me about CephFS. We waited a long time for CephFS, and it's stable at last in Jewel, but with several limitations. We only recommend using a single active MDS, the metadata server. You can have many standbys, but only one active, and do not use snapshots. We have repair and disaster recovery tools, we have integration with Manila, and we added a lot of authentication improvements.

A bit more about the MDS. We want to scale file performance. The IOPS scale by adding OSDs to the system, and of course using SSDs is always good for performance. You can have lots of files and a very large file system. The MDS has a cache for better performance; the cache size is related to the active set you are working on, not to the total number of files in the file system. You can add more RAM to the MDS to improve performance, and we always recommend SSDs for the metadata pool.

This is a picture of the MDS. The idea of the MDS, the metadata server, is that it's dynamic: each metadata server handles part of your file system according to the load. A metadata server can serve a single directory if that directory is very loaded, and the plan is for it to be active-active and dynamic, so when a subtree stops being loaded it can hand directories off again. The file system is partitioned dynamically. We have consistent caching, so clients can cache and stay coherent, with cache invalidation on update.

Snapshots are disabled by default in Jewel, and we hope that by Luminous they will be enabled by default. Snapshots work at directory granularity: RBD snapshots a whole image, but CephFS can snapshot any sub-directory.

Let's move on a bit. We have fsck and recovery tools. The fsck runs online operations to fix your file system. At the moment, in Jewel, it's manual; in Kraken or Luminous the scrubbing will become automatic. And we have disaster recovery tools to rebuild the file system in case of failure.
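To make the client side of this concrete, here is a minimal sketch using the Python libcephfs binding (the cephfs module). The configuration path and the directory and file names are assumptions for illustration; on a Jewel cluster the metadata operations below would all go through the single active MDS, while the file data itself goes directly to the OSDs.

    import cephfs

    # Connect using the cluster configuration (the path is an assumption).
    fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
    fs.mount()

    # Metadata operations (mkdir, open, close) are handled by the MDS;
    # the written data is striped over objects on the OSDs.
    fs.mkdir(b'/demo', 0o755)
    fd = fs.open(b'/demo/hello.txt', 'w', 0o644)
    fs.write(fd, b'hello cephfs\n', 0)
    fs.close(fd)

    # Read it back.
    fd = fs.open(b'/demo/hello.txt', 'r', 0o644)
    print(fs.read(fd, 0, 128))
    fs.close(fd)

    fs.unmount()
    fs.shutdown()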
There is integration with Manila; it's ongoing. At the moment Jewel is integrated into Mitaka, but I don't know about newer OpenStack versions, I'm not that involved in OpenStack.

Other Jewel changes: everything now runs as the ceph user rather than root, we have SELinux support, and the systemd integration is complete. We now also have ceph-ansible to install Ceph using Ansible, CLI completion, and Calamari running on the monitors. We also have ARM builds: aarch64 for CentOS and Ubuntu, and armhf for Debian Jessie.

Now RBD. RBD now supports image mirroring, which means you mirror an image to a different cluster; it's asynchronous replication to a different cluster. The replica is crash-consistent, and replication is per image. We use a data journal per image to support the crash consistency, and there's a daemon, rbd-mirror, that does all the work. Other RBD stuff: deep flattening, image features that can be turned on and off dynamically, the rbd du command, and a better CLI.

Now the RADOS Gateway. We have rewritten the multisite replication. There's a new sync engine for data replication, and everything is done by the RADOS Gateway itself. We have active-active zone replication, we have failover and failback, and we simplified the configuration. We also have an NFS interface for the RADOS Gateway; it currently supports only NFSv4, and it's used to import or export data to the object store. It's based on NFS-Ganesha. We have indexless buckets for users who put lots of objects, millions of them, in the same bucket. Those are buckets that don't maintain an index at all: you cannot list them, but operations are faster because we don't maintain the index. Currently you cannot use replication with indexless buckets; hopefully Kraken, probably Luminous, will have replication for indexless buckets as well.

And lots of API updates. Most of the Swift work was done by Mirantis, so I want to thank Mirantis; I don't know if anyone from there is here. For Swift we have Keystone v3, multi-tenancy, object expiration, static large objects (SLO), bulk delete, object versioning, and DefCore compliance; DefCore is a test suite. For S3 we have AWS v4 authentication support and LDAP support. STS is not in Jewel; STS is a token service, and it will hopefully be in Kraken, probably Luminous. Amazon uses it; it's an external authentication service, so the user gets a token from that service, that token is his pass to the storage, and the token service does the authentication. It's something lots of S3 users want.

Now RADOS itself. We improved queuing and monitor scalability, there are lots of optimizations, and there's a new messenger implementation, AsyncMessenger, that uses fewer threads and performs better. This is important: we no longer support ext4. It's sad, but the xattrs in ext4 are too small now, especially with RBD and also with the RADOS Gateway; you can get corruption because the xattr limit is too small. So we only support XFS; this is very important. We improved caching, and we added a second erasure coding plugin, SHEC, contributed by Fujitsu. And then there's BlueStore, which is very important too.
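Since everything above ultimately goes through librados, here is a minimal sketch of the object API with the Python rados binding; the pool name and configuration path are assumptions. It also shows the per-object xattrs that were the problem with ext4: the client-facing API stays the same whether the OSD underneath persists objects with FileStore on XFS or, later, with BlueStore.

    import rados

    # Connect to the cluster (config path and pool name are assumptions).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')

    # An object is data plus attributes; FileStore maps these to a file
    # and filesystem xattrs, while BlueStore keeps them on the raw device
    # and in RocksDB instead.
    ioctx.write_full('greeting', b'hello rados\n')
    ioctx.set_xattr('greeting', 'lang', b'en')

    print(ioctx.read('greeting'))
    print(ioctx.get_xattr('greeting', 'lang'))

    ioctx.remove_object('greeting')
    ioctx.close()
    cluster.shutdown()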
So, BlueStore. Today, in the end, we store data in a file system; there's a layer we call FileStore that stores the data, and currently we run it on XFS. But write performance suffers: we have lots of latency issues because of the file system layer, and it's also much more complex, because we actually use several files and need to synchronize updates between them. BlueStore removes that layer: instead of going through XFS it uses the block device directly, and it uses RocksDB as the key-value store for the metadata. BlueStore is just beginning; hopefully in Luminous it will be the default.

Now we're talking about Kraken and Luminous. The idea is to make BlueStore the default: in Kraken it will be in, but still as a preview; in Luminous, hopefully, it will be production ready and you will use only BlueStore. We also have erasure code overwrites for RBD, so we can actually use RBD on erasure-coded pools.

ceph-mgr is a new component of Ceph. It's like a monitor, but it's a separate daemon, written in Python. It will have REST APIs and it will have metrics. First of all, it takes away from the monitor some of the load that isn't really about running the cluster, it will also improve our metrics, and it can help with managing CephFS. We'll add on-the-wire encryption, optimize the I/O path in the OSD, make peering faster, add more quality of service, and add support for dm-cache, bcache, and FlashCache. We have dm-cache; I'm not sure about the others, but hopefully they will be there by Luminous.

For the RADOS Gateway, again, there's STS, the Amazon-like token service, Kerberos, and pluggable sync modules. The idea is that we can export metadata out and use it, first of all, for metadata indexing and metadata search: we can export the metadata into Elasticsearch and then the user can actually search it. It can also be used for tiering, either to tape or to cloud storage, like a public cloud. We have encryption, thanks to Mirantis, and object compression, and we are doing lots of performance improvements: there's work to scale up the performance of the frontend, and we are looking at performance with buckets that contain lots of objects, and so on.

For RBD, we continue improving the mirroring, adding high availability, delayed replication, and cooperating daemons. We're adding client-side persistent caching and encryption, and lots of improvements to kernel client performance. There are RBD-backed LIO iSCSI targets; we need iSCSI because users want to use VMware, and the only way they can use RBD there is through iSCSI. We're also adding consistency groups.

For CephFS, we will hopefully have multiple active MDSes, which makes the file system more scalable and gives more performance. Snapshots will be there; they are not yet in Kraken. There's Manila integration, there's integration with Samba for SMB and Ganesha for NFS, and we'll see if we get rich ACLs. There's Mantle, a Lua plugin for the multi-MDS balancer. We're adding some directory fragmentation improvements and statx support.

And other cool stuff: we have ceph-ansible and ceph-docker, we support IPv4 and IPv6, we have PMStore, which is an NVM backend from Intel, and we have a librados backend for RocksDB. So it's a lot of stuff.
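Going back to the RBD mirroring introduced in Jewel: the per-image journal is just an image feature you enable. Here is a minimal sketch with the Python rados/rbd bindings that creates an image with the exclusive-lock and journaling features that rbd-mirror relies on; the pool name, image name and configuration path are assumptions, and actually enabling mirroring on the pool or image is a separate step done with the rbd command and the rbd-mirror daemon.

    import rados
    import rbd

    # Connect to the local cluster (config path and pool name are assumptions).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')

    # Journaling (plus exclusive-lock, which it depends on) is what gives
    # rbd-mirror a crash-consistent, per-image stream to replay on the
    # remote cluster.
    features = (rbd.RBD_FEATURE_LAYERING |
                rbd.RBD_FEATURE_EXCLUSIVE_LOCK |
                rbd.RBD_FEATURE_JOURNALING)

    rbd.RBD().create(ioctx, 'mirrored-image', 10 * 1024 ** 3,
                     old_format=False, features=features)

    with rbd.Image(ioctx, 'mirrored-image') as image:
        # Check that the journaling bit ended up enabled.
        print(bool(image.features() & rbd.RBD_FEATURE_JOURNALING))

    ioctx.close()
    cluster.shutdown()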
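And on the RADOS Gateway side, here is a minimal sketch of talking to RGW over the S3 API with boto3, explicitly requesting the AWS v4 signing that Jewel's gateway added. The endpoint URL and the access and secret keys are assumptions; in practice they would come from a user created with radosgw-admin. The same puts and gets also work against an indexless bucket, except that such a bucket cannot be listed.

    import boto3
    from botocore.client import Config

    # Endpoint and credentials are assumptions for illustration.
    s3 = boto3.client(
        's3',
        endpoint_url='http://rgw.example.com:7480',
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY',
        config=Config(signature_version='s3v4'),
    )

    s3.create_bucket(Bucket='demo-bucket')
    s3.put_object(Bucket='demo-bucket', Key='hello.txt', Body=b'hello rgw\n')

    obj = s3.get_object(Bucket='demo-bucket', Key='hello.txt')
    print(obj['Body'].read())

    # Listing works for ordinary buckets; an indexless (blind) bucket would
    # accept the same reads and writes but has no index to list from.
    for entry in s3.list_objects(Bucket='demo-bucket').get('Contents', []):
        print(entry['Key'], entry['Size'])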
So, does anyone have something specific they want to ask? What is ceph-ansible? So ceph-ansible is a way to deploy Ceph with Ansible, and ceph-docker is a way to deploy Ceph in Docker. The Ansible repository has all the YAML playbooks for Ansible and how to use them. And ceph-docker, I haven't played with it for a while, but it's a way to run Ceph in containers in Docker and deploy it. That works. Yes? Do you have the link? Yeah.

Next question: every time he has asked about those drives, the ones where you use the drives themselves instead of having a server, he's been told "we'll come back to you", and he's never heard anything more. Well, that's an external project, so you can try the Ceph mailing list and see if anyone knows about it.

Can you run a cluster without ceph-mgr? ceph-mgr is an additional daemon. Do we have to have ceph-mgr in the cluster? I'm not sure. So the purpose of ceph-mgr is to offload a lot of the stuff that's in the mon right now, like the API access and statistics and analytics gathering, out of the monitor. So if you don't have anything that's using the Calamari API and you're not running some sort of management or monitoring dashboard or anything like that, then yes, you should be able to run a cluster without ceph-mgr when it comes out.

And is it possible to run multiple of these daemons in a single cluster, for redundancy? I don't think the question of horizontal scalability has been quite answered yet. Obviously it's a question they're going to want to answer, whether it's active/standby or master-master, but I don't think it has been decided yet. My thinking is that yes, you'll be able to have multiple ceph-mgr daemons in the cluster.

Another question: he has a small Ceph production cluster and asks whether he should go with Kraken. So, of course, Jewel is the long-term release; Kraken is short-term and it's only just out. It has lots of new cool features, and it has passed the testing we do upstream, but most users haven't used it yet.

About BlueStore's checksums, which are checked on read for your object data: does this mean that the deep-scrub interval is reduced there, or is it still the same? I don't know, actually. I think deep scrubs are not really as useful anymore if the data is checked on every read, and I assume someone will look at that, but I don't think BlueStore is there yet.

Yes? On quality of service: well, the idea is to try to limit... it's more for hyperconvergence, so we can run OSDs alongside other services and try to limit their resources, or at least have enough quality of service for the OSD to keep running in case of a problem. I think in general Ceph is not really strong in quality of service yet. Do you have anything to add about that?

Can you tell me about the performance increases in the messenger? There was a big factor there. The AsyncMessenger you mean? Yeah, the performance of the messenger. So there was an issue with the old one: it spawned too many threads, and sadly too many threads is usually not good for your performance, so the new one uses fewer threads. I don't know the implementation exactly, I wasn't that involved with it. Does this mean you can now scale more easily than before? Yeah, a lot of the scaling issues have been addressed. There's actually a collection of large-scale users that have been getting together to see how far they can push that.
So that is certainly one of them. There's a group of large-scale users, and there's some really good stuff coming out of that. Any more questions?

Can you also remove cache tiering? Well, this is about upstream versus what a vendor decides to support; sometimes a vendor decides not to support something that upstream will continue to do, so it's not always one-to-one. I think there was an issue where, in some cases, cache tiering had some bugs and caused some corruption, so that's why it's not supported there. I actually believe it's going to be supported; to my knowledge it's not going to be dropped. And again, the mailing lists are always a good place to ask. I know upstream they're going to rework how it works: there's a lot of design work on dynamic cache tiering, both up and down, to be able to push things to a hot tier and push things to a cold tier using things like bloom filters. As far as Red Hat's commercial support goes, that...

One more question: he's been having difficulty finding new use cases and asks what kinds of clients and workloads are a good fit. I didn't talk about it, but I can say, for example, that we're talking a lot about big data: Hadoop on the RADOS Gateway is something we're working on, and CephFS should also perform well for that kind of deployment.

Could I just ask, with reference to S3... This was about Debian packaging and mirroring... So, thank you.