Joining us, we're now in the shorter sessions part of our dev room program today, so this will be a 25-minute presentation. Again, if you do have to leave at any time, I invite you to go out the back door; it's a little quieter for our speakers. And so without any other interruption, I'm pleased to introduce Stefan Hajnoczi from Red Hat, who will be talking about NVDIMM under KVM. Thank you so much.

In the past this space was full of specialized, proprietary products. But now JEDEC, the standards body, has released standards, and that means hardware vendors and firmware vendors can come together and build hardware and software that actually work together for a wider market.

One important thing before we go into detail: NVDIMM and NVMe. There's probably a lot of "NV" here, NV for non-volatile, and these are two different non-volatile technologies. They're often confused with each other. So what's the difference between NVMe and NVDIMM? NVMe is a PCI Express flash drive standard; it's a hardware interface standard. An NVMe card is not a DDR4 memory module. An NVDIMM, on the other hand, is a memory module. It's the kind of thing you put in a DIMM slot on your motherboard. And the access models are different too. An NVMe card is a block device: the IO you submit to it is in sectors, like a regular disk. An NVDIMM is byte-addressable: the accesses you make go through the CPU at cache-line granularity, with ordinary loads and stores. The consequence of all of this is that if you go online right now and buy an NVMe PCI Express card, what you get are cards that are around two terabytes in capacity.
So it's like a disk, except it's very, very fast. NVDIMM is not like that. Remember, an NVDIMM is a memory module, so you have to fit the flash onto a DIMM, and getting two terabytes of capacity onto a DIMM is not going to happen. What I think you'll see with the first devices is capacities around 16 gigabytes, like the DRAM modules we have today; the big capacities are not going to happen for a while. But it's the latency that matters. The latency is close to DRAM. It's a completely different order of magnitude: not just "much faster than an NVMe PCI Express card", but far below anything a PCI Express card can give you today. So yes, the capacity is smaller, but the speed is in a different class entirely.

What can we do with it? In-memory databases are kind of the most obvious and immediate thing. You have some kind of database, a key-value store, something like Redis, and you'd like to add persistence to it. Even where these systems have persistence options, they come with limitations, and the performance cost is not trivial. A lot of users actually run things like Redis without persistence enabled at all. So it would be nice to get persistence for free, simply by putting the data in an NVDIMM. Now, it's not quite true that you get it for free; it's actually a fair amount of work. There are other applications too, like databases and file systems, because what they have is transaction logs and other kinds of metadata that need to be made durable, and NVDIMM is a natural place to do that.

The programming model is virtual memory: the kernel can map any piece of the NVDIMM into a process's address space, and whatever you store through that mapping is persistently stored. The data will still be there even after the machine is shut down, even after the program has stopped.
When you start up again, the data is right there; you don't need to wait for it to be loaded back in. So what's the best way to get at it? The simplest solution: file systems. File systems are what we already use for organizing data into files, and that still works here. You can put a file system on an NVDIMM, create files on it, and then different applications can access the NVDIMM and store their data in those files. Underneath all of this, below the files, where a disk would have a partition table, NVDIMM has its own level of structure. It has its own way of organizing the data: namespaces, which allow you to slice the device up, and underneath that something called the label area for recording how it's organized. I think the main message here is that file systems are going to play an important role even with NVDIMM. The catch is that the file system stack adds latency, and latency is exactly what NVDIMM is supposed to eliminate.

Probably the most important thing in this presentation, for learning about NVDIMM and why it's unique, is this right here: DAX, direct access. So here is how it normally works. An application opens a file by name, and then it uses read and write system calls to access the data. What that means is context switching between user mode and kernel mode; the file system driver decides where the data lives; and below that there's a block layer and a device driver that take the IO and actually read and write the data on the medium. There's overhead in that whole path: we have RAM-like latency on the device, but every access goes through all those layers. So what we do instead is this: the application opens the file, and then passes the file descriptor to mmap, which maps the file into the application's address space.
From then on, all accesses go straight to the medium. We're not using system calls, we're not using read and write, we're just using load and store instructions. This is where we get to bypass the entire IO stack. Of course, that has consequences. For example, IO statistics: the block layer can't count IO that bypasses it, so you currently won't see this traffic in the usual tools. Another thing to note is that direct access doesn't always kick in: the file system has to support it and the layout has to be aligned correctly, so you only get it when the whole stack cooperates and all the stars are aligned. It's important to verify that the mapping is actually behaving the way you expect.

There are actually two modes here, and each of them takes a bit of explaining. The first is memory mode, where the application does loads and stores directly against the NVDIMM. That's very efficient. Loads are just regular loads, but stores are a little bit special. You can't just write a normal program and expect it to survive a power failure: you have to make sure the data has reached what we call the persistence domain. The persistence domain is the part of the architecture where, if power fails, the platform guarantees the data will still be written out. On today's platforms that means you need to flush the CPU caches: data that's sitting only in a CPU cache can be lost on power failure, so there are special instructions to push it out before you can consider it persistent. So what else is different about this compared to normal loads and stores?
For a start, there's no error code coming back; there's no errno, no EIO. The error handling is completely different. The only way you get error information is the way you get it for RAM: a failed memory access is reported asynchronously, as a machine check delivered to the program as a signal, and your program has to be designed to handle that. If you write a normal program and a memory access fails, you don't get an error return; it looks just like a RAM error.

The other thing that's different is atomicity. With read and write you could rely on sector atomicity; with loads and stores you only get the small atomicity the CPU gives you, so a power failure in the middle of an update can leave torn, partially written data, and your data structures have to be designed around that. So that's the other thing to keep in mind about this mode.

That's what the second mode is for: sector mode, a software-based sector IO model where, instead of the RAM-like cache-line accesses, you get the traditional block model that the entire IO stack already knows about, files and sectors, the model lots of software has been written against. It doesn't involve mmap at all: you're not addressing the medium directly, which also means a buggy program can't scribble over your data with a stray pointer. It also means that even though it's still fast, it's not as fast as direct access: every IO goes all the way down the stack and back. The upside is that existing applications work unmodified on top of it, and with direct access not all of them can, because of the differences I just described.

So what I've shown you is that when you buy a physical NVDIMM, you can access it directly with loads and stores, but the model doesn't guarantee your stores are durable until you flush them to the persistence domain. And there's one more difference: with the traditional path, the kernel has to copy the data from the disk into the page cache, and then every process that reads it pulls it into its own buffers.
With direct access there's no need for that copy: the page is mapped straight from the device, so there's only one copy of the data, and every process that maps the file accesses that same copy. There are projects where this kind of thing matters a lot, like lightweight containers, where many instances share the same data; why keep duplicate copies of all of it?

And that's pretty much the end of my slides. I just want to say that, depending on which aspect you're interested in: if you want to learn about the hardware, there's documentation you can look at, and there's kernel documentation being merged right now that shows how this all fits together. On the KVM side, what we have at the moment is emulation: the hypervisor can present an emulated NVDIMM to the guest, and the guest drivers then use it the same way they'd use the real thing.

[Audience] You mentioned laptops; I'm just wondering, since the NVDIMM contains a DIMM component, can it be used as regular memory?

[Stefan] Yes. It won't be safe across power failure in the persistent sense in that configuration, but there are attributes you can check to see what you have, and you can configure ranges of it as volatile. At that point it's just RAM: ranges that you trust to behave as RAM, which is why you'd go for that option.