OK. Ceph on Windows. So my name is Alessandro Pilotti, Cloudbase Solutions. One of the things we did recently was to port Ceph to Windows. Why did we do that? The main idea is that we needed Ceph working on Windows servers, because Ceph is, of course, one of the most popular distributed open source storage solutions, and Windows Server still has a pretty large market share, especially in the enterprise. iSCSI gateway performance was always a big bottleneck and a source of complaints from many people, and that's actually where the rationale for this work came from. It's an idea that we had been floating around for a while, and it actually came to be when we did the work together with SUSE. We had a fantastic partnership with them, and we managed to pull this project off. We had a set of architectural goals. Our main purpose, typically, when we work on bringing together Windows and Linux, which is something that we do regularly as a company (if you know the history of Cloudbase, we did this before: we did it with Open vSwitch, we did it with Hyper-V in OpenStack), is that we live between these two worlds, in the entire space between the Linux side of things and the Windows side of things. So the moment we bring them together, we want to make sure that the experience is seamless for people coming from a Linux background to the Windows world, and it has to be seamless also for people from the Windows world the moment they start using tooling which was born in a Linux environment. So we wanted the user experience on Windows to be as close to the Linux one as possible. It had to integrate in the best possible way into the Windows ecosystem. And as I mentioned already, it had to outperform the iSCSI gateway and get as close as possible to Linux native performance. And spoiler alert: we actually managed to outperform Linux as well, at least in the tests that we did.
It had to be secure, of course; we live in a world in which we cannot omit that part, so security was a primary goal here. And it had to support all the modern Windows Server releases, so we support 2016, 2019, 2022, plus Windows 10 and 11 for development use cases. One of the non-goals was to port the OSDs to Windows. So the idea, at least for the time being, is not to make Ceph run on Windows independently of the OSDs running on Linux; the goal is to integrate it in an environment where both can live together. Here is an idea of the architecture. It was challenging work from a technological standpoint, so we had a lot of fun doing it. The user space side of things, when we port projects from Linux to Windows, can usually be adapted really well. But in this case we had to write something completely different on the kernel side, because, of course, the Linux and Windows kernels work in different ways. So we wrote a kernel driver, which is WNBD, that I will explain a little bit better in a bit. On top of that, in user space, we ported librbd and librados, and we added a bunch of additional components. And then those things connect directly to the OSDs, which live on the Linux side. Just one question: how many of you have already tried Ceph on Windows? OK. How many of you are planning to do so? OK, very good. Excellent. So, as I was mentioning, we ported librbd and librados, and in general the CLI surface, the command line interface. The RBD mappings are managed by the rbd-wnbd command, which can also be invoked directly through rbd: so rbd device map, unmap, and device list, OK? As you can see, it's a very familiar command line interface. On Windows, we have services instead of daemons, so the stuff that you would normally do with, for example, systemd on Linux, here you do with Windows services.
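To give an idea of what that familiar CLI looks like in practice, here is a sketch of the mapping workflow from an elevated Windows prompt; the pool and image names are made up for illustration:

```powershell
# Create a 10 GiB image (pool/image names are just examples).
rbd create rbd/test-img --size 10G

# Map it as a local Windows disk; same syntax as on Linux.
rbd device map rbd/test-img

# Show active mappings, then unmap when done.
rbd device list
rbd device unmap rbd/test-img
```

These require a reachable Ceph cluster and the usual ceph.conf and keyring in place, so treat them as a sketch of the workflow rather than a copy-paste recipe.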
Because, of course, once you create a mapping, most probably you want it to survive a reboot. So we have a Windows service which handles that. And as I mentioned before, the rbd command line interface works in a very similar way. Now, the core of the project is this Windows kernel driver called WNBD. For the technically minded among you, it's a Windows kernel driver which implements a virtual storport miniport, OK? So think about it like other network storage drivers on Windows itself. How did we implement it? We implemented it with some architectural design choices in mind. One of them was to make sure it could speak NBD over TCP/IP, in a way that is compatible with any NBD server. Of course, this is not the most performant use case, but it's very useful, for example, for testing, or to expand it beyond the Ceph realm, OK? So you could use it for doing any type of network oriented storage, right? That was one of the goals we had in mind, and that's why it's called WNBD, and not, I don't know, "Windows Ceph driver" or something like this. And there is also a local user space to kernel channel, which is meant to be extremely fast and performant, OK? In that case, that is the preferred interface, and that's what we used, of course, to get the maximum performance we could achieve out of this. License-wise it is, of course, open source. We are a company which lives open source, breathes open source, and whatever we do tends to be open source first. So that's where it comes from. Originally it was on GitHub under cloudbase/wnbd, and now it's part of the Ceph organization on GitHub, so you find it at github.com/ceph/wnbd, OK? We work together with the Ceph Foundation, and we're sponsored by the Ceph Foundation. We work together with them and currently we're running a CI.
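Since WNBD can also attach to any standalone NBD server over TCP/IP, a generic mapping looks roughly like the hypothetical sketch below; the exact wnbd-client flag names may differ between WNBD releases, so check the tool's help output rather than taking these literally:

```powershell
# Hypothetical example: attach a plain NBD export served by a Linux box.
# Flag names may vary across WNBD versions; check "wnbd-client help".
wnbd-client map my-export --hostname nbd-server.example.com --port 10809

# List the active WNBD mappings, then detach.
wnbd-client list
wnbd-client unmap my-export
```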
So any change that happens in Ceph actually gets tested on Windows too, which is very important, of course, for moving the project forward. How do you configure this? Well, exactly like you would configure it on Linux. So you have a bunch of configuration files. The ones that on Linux you would put in /etc/ceph, on Windows go in a different location, which is C:\ProgramData\ceph. I will show you in the demo. What do you put there? ceph.conf, keyring files, and things like this, OK? So the typical use case is: you're adding some Windows nodes, you just create a directory, copy over the files from Linux, install Ceph with the installer I will show you in a bit, and that's pretty much it. Very simple and user-friendly. Besides RBD, which of course was our primary use case, the next thing we wanted to do was also to port CephFS. How many of you guys are using CephFS? Cool. So CephFS is something that we wanted to have on Windows. And the nice thing is that you can write files from both sides, and it works like magic, because you don't have to care what type of file system you have: no problem wondering whether it's ext4 or NTFS, it just works. How does it work? Normally on Linux, this type of thing uses FUSE, which is not available on Windows. On Windows there is something similar, which is called Dokany. It's, again, another open source project that we leveraged here. I will show you during the demo how this works. It's an evolution of the previous ceph-dokan community work, but, of course, there is a lot that we changed in the process. OK, Windows being Windows, you install things not with apt install or yum install; you need MSI installers. So that's what we did: we implemented an MSI installer.
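As a sketch of that configuration step, a minimal C:\ProgramData\ceph\ceph.conf might look like the fragment below; the monitor addresses and paths are placeholders for your own cluster, not values from the talk:

```ini
; C:\ProgramData\ceph\ceph.conf -- example values only
[global]
    mon host = 192.168.0.10 192.168.0.11 192.168.0.12
    keyring = C:/ProgramData/ceph/keyring
    log file = C:/ProgramData/ceph/$name.$pid.log
```

Alongside it you would copy the client keyring file from one of your Linux nodes, exactly as described above.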
I mentioned at the beginning that we wanted things to be familiar both for Windows devops people and for Linux ones. So on the Windows side, you have a traditional MSI package that you can either just install with a double click, next, next, next, or you can automate it with whatever automation tooling you might prefer, Ansible or whatever else. The installer itself is also open source. There are continuous builds available, so any time something changes in WNBD or in upstream Ceph, new releases come out. We have, at the moment, two different ones: a Pacific one and a Quincy one. It's available at that link over there; it's going to be moved soon, I think, to the Ceph Foundation pages as well. So that's the process we're following. Last but not least, when you live in the Windows Server world, very often you need to address virtualization too. Hyper-V is a fantastic hypervisor which comes with Windows. So we wanted to make sure that we could not only access RBD images from virtual machines running on Hyper-V, but also that you can boot VMs directly off RBD images. So this is an example: you can see there, you just create a VM, in PowerShell in this case, add the disk drive, start the VM. It just works, you know? You will see a demo; it's actually very nice to see how the whole thing works. Performance was a key aspect in all this. Why? Because it wouldn't really make sense to do this work if it performed worse than the iSCSI gateway. The iSCSI gateway has two problems: one is the performance, and the second is that it can be a single point of failure in your organization, right? We had to address both of them, but without addressing the performance, it didn't really make much sense.
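For the unattended case mentioned above, the standard msiexec switches apply; the MSI file name here is an assumption for illustration:

```powershell
# Silent, unattended install of the Ceph for Windows MSI package.
msiexec /i ceph.msi /qn /norestart
```

The same line works from Ansible or any other tooling that can run Windows installers, which is the point of shipping a plain MSI.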
So when we created the architecture for this, we were very, very keen to avoid any possible bottleneck that we could identify. Not only did we do that, but we managed also to outperform rbd-nbd and krbd on Linux in our tests. For a series of reasons that I can discuss with you, but let's say that, since we came afterwards, we could see what kind of bottlenecks existed on Linux and avoid them, especially on the threading side of things. WNBD, as I mentioned before, communicates with user space through I/O controls, which is the fastest way we could get a user space to kernel communication channel. We have much improved I/O worker thread allocation compared to other designs that we were looking at before. And we published a bunch of blog posts. When we do performance-related research in blog posts, we publish not only our results, but the entire methodology. Because otherwise, we did the project, and somebody might say: hey, you guys did the project, obviously it's faster than the other guys'. So we said: OK, here's the project, we open sourced all the tooling, we open sourced all the scripts we used; please run it yourself, and if you find inconsistent results, please get back and let us know. But so far, the feedback we had was extremely positive. All right, time for the demo. It's a very short session, so we have only a few minutes for the demo, and then we will have time for questions. OK, demo. I don't see it over there; maybe I have to stop this. Yup, there we go. OK, so this is just a Windows machine. I already downloaded the Ceph MSI, so the first thing I'm going to do is to install it. You start it like any other installer. Again, you can also automate it with msiexec, with the same automation rules you use for any Windows installer. So if you need to use it with Ansible or whatever tooling you might have, it will just work.
As you can see, it's LGPL. You can also install just the Ceph CLI, if you want, but that doesn't really make much sense for our demo, so we install both the CLI and the Windows driver. And that's pretty much it. The driver is signed by Cloudbase Solutions, so I accept it. That's it. I'm going to do a reboot because I had a previous version of the driver installed, so we get a clean state; normally, if you start fresh, you don't need to do that. So, just a quick reboot. What I have here, basically, is two command prompts. In one of them, I SSH into a Linux machine, which is, of course, connected to my Ceph cluster. In the other one, I have the Windows side, so you can see both things at once, and I can show you that the commands work on both sides. All right. So for example, let's do an rbd list. You can see I already have an image called hyperv-1, which was created for one of the next demos, which is obviously about Hyper-V. What I'm going to do now is to create a new image, and I do it on the Windows side; you see, this command prompt on the right is Windows. If I repeat the rbd list here: voila. Same identical experience, it just works the same. Thank you. Next, now we do the mapping. The moment I map it, Windows will discover that there is a new disk. It tells me, of course, that it's an empty image, so it's an empty disk, and I have to initialize it. And this is it. So now I have it here. I can create a volume; it's 10 gigabytes, as you can see, NTFS, whatever. And that's it: I have my volume there on Windows, perfectly visible. I can unmap it later, and I can treat it exactly like any other disk that I have on Windows. Very, very simple. Among the changes that we are working on, we will also add support for clustering, especially CSV volumes and things like this.
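The initialize-and-format step from the demo can also be scripted; this PowerShell sketch assumes the freshly mapped RBD image showed up as disk 1, which you would confirm with Get-Disk first:

```powershell
# Bring the newly mapped, empty disk online with a GPT layout,
# then carve out one partition and format it as NTFS.
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS
```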
So at that point, it will be completely transparent, independently of the fact that it uses Ceph as a backend. But again, the goal was to make sure that it was seamlessly integrated in the Windows ecosystem, which is what we did. The rbd-wnbd command can also be used for listing. Here you can see my mapping. If I reboot the machine, this mapping is persistent, so it will automatically be reinstated when the machine restarts. I can also do transient ones; I may need those for testing and stuff like that, but in the majority of use cases you will want persistent ones. Good. So that's the first demo. Now let's talk CephFS. So, ceph fs volume ls, right? On the Linux side, I don't have any. Creating one. Here it is. Now on the Linux side, I'm going to mount it, and we want to create a simple file inside of it. Something cheesy and simple, like "hello from Linux" and whatnot. I wanted just to put a file in there, you know? Because I want to show you how you see it from the Windows side. Let's go to the other side. So this command here mounted it on Windows. If I go back here, you see this new drive, X:, because I told it to mount it with the X letter. If I open it, voila, I see my file, no? With all its content and everything. Now I can create a file here, something like "hello from Windows", and if I go to the Linux side, voila, there is the other file. So this is, again, very useful to bridge the gap between the Windows and the Linux worlds, right? Good. So that was demo number two. We have another one, which is about Hyper-V. I skipped a part already: if you do an rbd list, there is already a hyperv-1 image. How did I create it? I simply converted it with qemu-img. I converted, in this case, a Linux image, an openSUSE one actually, from qcow2 directly into an RBD image. I could do it live as well, but it takes a few minutes to do all the work.
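The Windows side of the CephFS demo boils down to a single mount command; this sketch assumes the Dokany driver is installed and the usual ceph.conf and keyring are in C:\ProgramData\ceph, and the file name is made up:

```powershell
# Mount CephFS as drive X: (requires the Dokany driver).
ceph-dokan.exe -l x

# The mount behaves like any local drive; a file created on the
# Linux side is immediately visible here.
type X:\hello-from-linux.txt
```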
It doesn't really make sense to stay here and have you wait for that, so it's already done. And what is the purpose of this? The purpose is that I am going to mount it on the other side. More exactly, I can actually terminate the mapping. So now it's mounted. If I go to my disk manager and refresh it, here I have it: disk two. You see, unlike the previous volume, which was empty, this one already contains something; you see all the partitions, of course, of my Linux machine. In order to use it in Hyper-V, since I have the rule to automatically mount any external disk, I have to put it offline, which is what I'm doing here. Now I can just create a VM. This is an empty VM, obviously, without anything. And here is the simple trick that makes it all add up: you see, I'm just telling Hyper-V, in this case, to attach a disk that will automatically pass through to that particular disk that we mounted. Fully transparent. So if I do a Start-VM, there it is. And here is my Linux machine booting, my openSUSE machine. And voila, it just works. We don't have to wait for it to boot and everything. The important thing, as you can see, is that all the I/O is actually going to RBD in this case: it goes to the driver, it goes to librbd, it goes outside, and it's actually served by the Ceph cluster, which is running somewhere on our Linux side. Good. OK, that closes the demo. We have exactly two minutes left. Quickly resuming the presentation: some information about where to get the bits. You can find us, of course, in the Ceph community. And we have a booth here, the Cloudbase booth; if you have any questions, please come over there. Any questions now? No, it's actually the same command, which calls across to the driver. Yeah, just a wrapper, yes. Yeah, correct. It's a bit of a long explanation; I think it's better if we talk about this at the booth. But it's largely about how threads are used and implemented.
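Putting the Hyper-V demo together, the whole flow looks roughly like the sketch below; the image, pool, and VM names and the disk number are examples, and the disk number must match what Disk Management shows after mapping:

```powershell
# Convert a qcow2 image straight into an RBD image
# (qemu-img has a native rbd: target).
qemu-img convert -O raw openSUSE.qcow2 rbd:rbd/hyperv-1

# Map it on the Hyper-V host, then take the disk offline so it
# can be passed through to a VM (disk number 2 is an example).
rbd device map rbd/hyperv-1
Set-Disk -Number 2 -IsOffline $true

# Create an empty VM, attach the physical disk, and boot it.
New-VM demo-vm -MemoryStartupBytes 2GB -Generation 1
Add-VMHardDiskDrive -VMName demo-vm -DiskNumber 2
Start-VM demo-vm
```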
So we expanded and used, as much as possible, multiple CPU threads to be able to achieve that, especially in the user space to kernel communication and things like this. So, maybe caching I/O? Yeah, yeah, right. Right, any other question? Well, we tested mostly active-passive in our use cases, but, let's say, from the iSCSI perspective, it will outperform the active-active as well. We can perform tests on that side too if needed. Correct, yeah. Basically, you skip an entire layer at that point, so it's more performant by design, in the end. Any other questions? I didn't want to say that. OK, yeah. Yeah, OK. Yeah, 2016 is the first one we support. So 2016, 2019, 2022, and of course, for development use cases, Windows 10 and 11, right? So, technology-wise, it works on all those platforms, right? OK. Anybody else? OK. We have many, many users, because it's open source, right? And so we don't have telemetry coming back. But we've got many, many companies that reach back, either to thank us or to ask questions about the future roadmap and everything. So I know that there are many clusters out there that we're not aware of. The funny thing about working in open source, and we do it a lot, is that when people don't complain, it means it works well, because they very rarely come and say, oh, it works great; they usually come and say, oh, this thing doesn't work. And actually, I can't wait, hopefully, to be at Cephalocon. I will do another presentation there, and hopefully hear more from the community. OK, thank you very much, guys. If you have any more questions, I will be at the booth. Thank you.