Great. Thank you, Christine. Before I get rolling, I would like to say thank you to the Linux Foundation for organizing this webinar. We worked with some really great people and they pulled together a really great show. So, Christine, thank you very much for your help in pulling this together. I am Scott Stetzer. I work for Kioxia America in the Memory and Storage Strategy Division. Kioxia, of course, is one of the largest manufacturers of flash memory for the industry today. Back when we were known as Toshiba Memory, we invented flash memory, so we know quite a bit about it. We also deliver quite a broad range of SSDs to the industry, and my particular group, the Memory and Storage Strategy Division, is focused on bringing that flash storage and software expertise forward for you, the customers. Data storage is a pretty interesting topic, at least for me. Ever since the very early phases of human history, storing information about ourselves has been a thing, as far back as putting paintings on cave walls. That has progressed over time through things like hieroglyphs and stone tablets, and when ink was invented, we were putting ink on papyrus. Then we get to the industrial age, and a very interesting change happened: we were able to put print on paper to such a degree that our data storage mechanism became books. A very useful approach to storing and retrieving data, and a little more appropriate to us as we move into the computer age within these last 50 to 100 years. The first method of storing data in the computer age was, of course, punch cards, a very mechanical process. Then we changed to linear magnetic storage, which of course is tape, and tape was a very sequential access methodology using magnetics. The next big innovation was to change that magnetic approach from linear to rotating: rotating magnetic storage, what we know today as hard disk drives and what used to be floppy drives.
And again, a very mechanical approach to storing data. What that did for us was establish the HDD paradigm, and that HDD paradigm has evolved over the last 35 to 40 years. When we started out, that hard drive was probably about the size of the average washing machine. And over these 35 to 40 years we have been shrinking the footprint of that storage mechanism. We went from washing machine size to 16 inch, to eight inch, to five and a quarter inch platters of magnetics, and we settled in the three and a half inch to two and a half inch range today. And while we were doing those shrinks, we were also increasing the amount of data that can be stored in that ever shrinking space. Now, the consistent bit about this is that all of it was designed quite a while ago, all before flash was invented as a storage media. And again, it's a very mechanical approach to data storage. And that mechanical approach means it's pretty much a black box. It's not terribly flexible: you can read from it or write to it, but the ability to adjust how that device works is very limited. There's no control over latency in that mechanical spinning disk. Now, over the last 10 to 15 years, solid state storage in the form of flash has come into play, and SSDs were quite a game changer for us in the industry. Initially, SSDs started out simply as a faster form of HDD. And this was pretty important, because in order for those SSDs to become adopted, they had to plug right into the existing storage paradigm, which was the hard drive. SSDs today continue to be extremely important for all sorts of applications, especially consumer applications and enterprise server ecosystem applications. I can't imagine running my laptop today without an SSD in it; it used to have hard drives, but now I can't imagine running without an SSD. The challenge, though, even up to today, is that while all of this backwards compatibility is phenomenal and important in order to keep compatibility up, we're still limited.
We're still limited by that mechanical HDD paradigm. And there's a range of storage innovators out there that are looking at new ways to bring all of the capability that flash can actually bring to the equation, beyond that HDD paradigm. One of the key innovators, of course, is the cloud or hyperscale data center. They're looking for ways that they might be able to bring the full capability of that digital storage media, flash storage, to their application development. So why is it that the hyperscale or cloud data center is a center of innovation? Well, one of the key aspects is that they're vertically integrated; they own the entire stack in the data center. They can work from the server, to the storage, to the applications; they control every aspect of it. And they're creating all of those new and wonderful applications that we use every day and expect to work seamlessly and fast. So while they're doing all of this work, they're redefining and looking for how they can make storage work more efficiently for these new applications. And all of this, of course, is in software, so they need the capability to develop storage at the speed that they develop their software and their applications. The hyperscale or cloud data center, of course, is built on being software defined. They've had tremendous success in moving to a software defined networking approach. Software defined computation is there; software defined hardware is there, and they're starting to see a lot of use cases for FPGAs in hardware definitions. There are even software defined storage layers out there, taking all-flash arrays and delivering service approaches at the storage layer. What's missing is this lower layer of the storage mechanism, the drive itself. And what about the flash? Why isn't that software defined yet? And that brings us to today's presentation: what if flash was software defined? We have to ask ourselves, what would we need to do to make flash storage software defined?
So let's explore that a little bit. First let's look at some of the challenges that the industry needs to solve. We talked to a lot of the hyperscale customers, and we've been able to gather information from a wide range of them about what it is they're looking to do and how they look to accomplish it. So, they're looking for data placement. They know their data, they know their applications, they know how they want to store the data, so they want to have fine-grained control over where and how they store their data. At the same time, they're in complete control of their workloads, and they know their tenants and how to isolate the workloads of one tenant from another, avoiding any cross pollination of IO, the noisy neighbor syndrome. And they want to be able to control latency; they want to handle how fast they can get to that IO and control that, so they can deliver quality of service to each of their customers or each of their applications. We all know workloads adapt and change over time, so they want to be able to adapt to new workloads on the fly as those workloads change. On the other side of this slide, we're talking about a couple of other things that have to do with the hardware itself, starting with over provisioning at scale. Inside of our laptops, we over provision and we RAID the flash die inside that SSD. It's a single device, we want your data to be as safe as possible, and we want it to work at its most optimum speed, so we over provision and we RAID the flash die. What this means is we're reserving bytes out of that available flash in order to handle that over provisioning and that RAID function. The hyperscale data center, typically, is going to want access to every byte of flash, because they're handling data integrity at a higher level: they're handling it at the rack level, not at the individual device level. Power loss protection circuits are something we put in every SSD as well.
In most data centers, though, they have power loss protection or battery backup at the rack level, so power loss protection in an individual device is probably overkill. A software defined approach can help solve some of these industry problems, especially the tenant and workload problems. So we look at these problems and say, okay, how can we solve these? Well, the first thing we probably need is to abstract the flash with a software based programmer's interface, an API. Now, this API would have to handle a number of key aspects. That would include how you handle the low level flash itself: there's flash programming in terms of voltages and timing, there's ECC, there's dealing with bad pages and bad blocks of data. The API would also need to look at the differences between different types or generations of flash and abstract that, as well as differences in vendors' flash, and handle that through abstraction as well. Hardware isolation, all the way down to the flash die level, would be the ability to solve that noisy neighbor issue that I described earlier, which is important. How do we make sure that the software and the application have discrete control over that workload, so you don't get any performance lags? And then another aspect: that host and server CPU is there to do application functions. So is there a way for the API to abstract the flash in such a way that it doesn't become a burden on those resources, reducing the number of host CPU cycles that need to be spent for certain features like garbage collection or wear leveling? The next question that we would ask ourselves is, if we were able to do a software defined flash approach, what benefits could be derived from doing so? Is there a valuable service that can be performed by going software defined? The answer is pretty clearly yes.
Today, you have to buy different types of flash devices to serve different purposes in the data center: fast ones, slow ones, cold data storage, hot data storage, all with different attributes. With a software defined flash approach, you can deliver a single adaptable flash device that can then be repurposed for different use cases. We're focused on all those low level flash problems like programming, erasing, ECC and error management, and through software it could easily be adapted to new high level requirements at the host: change the protocol that the device works under so it can be used for different use cases or data types, hot data or cold data. We want to be able to reuse large amounts of code that we've deployed for each of these flash devices, and through that code reuse get faster internal development cycles, as well as simplifying the maintenance of the flash devices themselves. So if we look at this and say, okay, what do we really need? We need to bring forward the speed and flexibility of flash: how do we get that maximum potential out of the flash itself? And we need to combine that with the ease of software definability: how do we get all the benefits that flash as a digital storage media brings us, without all the traditional overhead of the HDD paradigm? Well, I'm actually privileged in my job here at Kioxia America to work with a very large group of smart and creative engineers, both here in the US and in Japan. And rather than just bring you all the challenges and the problems, I'd like to also suggest a solution. There is a software defined flash API available, called software enabled flash. Software enabled flash is a new approach that's designed to bring flash storage innovation to you, the developer. It's a powerful way forward for flash to be deployed in data center and emerging storage applications, where we really shed the legacy HDD paradigm.
We allow flash to behave like flash, which then allows us to bring that flash forward to behave in much more predictable and uniform manners, and that can also give you the ability to define better latency outcomes for your application. And flash can be used at its native speed. Software enabled flash brings the software flexibility and the scalability you need in the data center to you, the storage developer. It's really a powerful way forward. So what is software enabled flash, in reality? At a high level, it's really about the hardware and the software working together seamlessly. Let's look at the hardware portion of this briefly. On the left of this slide you can see a hardware block diagram. In this case, this is a purpose built, media centric flash controller, specific to deploying flash using the software enabled flash API. It handles all of the low level tasks of managing the flash itself: the flash page programming timings and voltages that are required. It handles the health of each individual flash cell, and brings forward the use of ECC and defect management within the flash controller. We've seen from some of our customers different and varying needs for how they use DRAM in the device, so it can deploy DRAM on the device, or you can reduce the amount of DRAM, or completely eliminate the use of DRAM on the device itself. One other thing I'd like to highlight: we, of course, being Kioxia, deploy our flash using a flash interface known as the Toggle Mode interface. In order to make this controller capable of using different types of flash, you simply go in and change the flash level interface to whatever is necessary to deploy the next type of flash. It's a fairly easy and straightforward process: you can change that Toggle Mode out and employ the ONFI mode, and a different vendor's flash could then be used, deploying software enabled flash for your use cases in a multi vendor approach.
So if we change the approach and start looking at the software side of this software and hardware working together, the software stack is shown here on the right. Of course, we have a device driver that talks to the hardware and the controller itself to bring that flash forward. We have a software enabled flash API that comes forward in a library that gives access to the flash through the file system and the traditional block level drivers and reference FTLs of the operating system, and we have sample source code that shows how to implement that. The hyperscalers, the cloud customers, have also defined their own approaches to this file system and operating system stack, so it's very easy to take the SEF library, the software enabled flash library, and apply it to those custom software defined storage stacks. But what I'd like to highlight really quickly, if I can, is to notice that the library, this section in blue, actually runs all the way up to the user application. You can use flash native semantics through the software enabled flash API and library to talk directly to the flash device, bypassing the traditional file systems in your OS. And to demonstrate this, we have a couple of interesting applications that use this flash native approach. We have a version of RocksDB that talks flash native through the software enabled flash API. We've got a version of Firecracker for virtualization that also talks directly to flash in the flash native mode. And for testing purposes, we took the IO generator known as FIO and adapted that to the flash native API approach as well. Now, all of the software aspects that I've talked about are going to be available through open source; the API and the libraries are already up and available in open source. Later this year we'll be bringing out all of the sample source code in the form of an SDK, and those will also be available as open source.
So bear with me: I'll talk a little bit about the open source strategy at the end of the presentation. But let's take a look at the programming capability. There's a wide range, and again, sample source code is available, of different aspects of enabling and deploying software enabled flash: anything from a CLI that allows a host to automate and batch up the creation of virtual flash devices using the SEF API, all the way through data handling commands and event handlers, as well as our data path commands, which cover a certain range as well. So it's a pretty flexible library. Now, talking about programming and source code via PowerPoint is probably not the most exciting thing I myself can be doing; there are some really smart people that could probably make that interesting, but I don't think I can. So what I'd like to do is change the approach and not talk so much about what the source code looks like, but about how you use it. How do you actually turn on, enable and deploy software enabled flash capabilities? I'd like to do this through some demonstrations. First we'll talk about how you define a software enabled flash device; data placement and workload isolation will be a big part of that definition. Then we'll talk about how we can deploy different protocols and different applications on the already defined and isolated flash device. After that we'll talk a little bit about latency and how you can control latency all the way down to the individual IO level using advanced queuing methodologies built into the API, and we'll do a quick demonstration on how we can offload those host CPU resources for better program efficiency in the hardware. So let's move forward and talk about how we define a software enabled flash device and how we use that definition to isolate workloads from each other.
So the first thing we need to do, as you can see in the animation that I have here, is lay out a typical flash controller and its flash die: we've got eight channels with four flash die per channel. And the idea is to isolate hardware all the way down to the flash die level. In this case, isolation by flash die is done by what we call defining a virtual device through the API. We'll take different blocks of flash die and create virtual devices, so you can see in blue virtual device zero, in red virtual device one, and in green virtual device two. You can create layouts with different numbers of flash die, but very easily define completely isolated zones of flash. Now, on top of these virtual devices, we can layer a second level of isolation through software, what we call a quality of service domain. This software layer allows us to define different protocols that we want to deploy in each of these domains. We can control latencies and different operations through these software isolated zones, the quality of service domains. Now that we've defined a device, the real purpose for having storage is to be able to run applications and store data. But every application and every device needs to run in a slightly different manner depending on your workload. So let's take a look at how we can deploy different types of protocols on these defined locations within this flash device. Let's start by deploying a typical web style application, and of course, let's start with the simplest mode, which would be a traditional block mode, a typical solid state drive type of operation. In this case, so we can show the isolation activity, we're going to run this with a mixed read and write IO pattern. For QoS domain two, let's bring in a different app: a streaming app.
And it's important for us to show that you can customize how you use and deploy your protocols, so we're going to run this using a custom FTL capability, a completely custom written protocol for deploying storage like a drive. In the third zone we'll run yet a third app and a third protocol: a database style app. The streaming app will do 100% read IO, and this database application, running in ZNS mode, will run with a 100% write IO pattern. Now notice, as I've done this, I have left open die in the layout definition that are unused. When we get to running the demonstration, that will show you that we are truly isolated in our working zones. So let's move forward and show you the actual demonstration running on actual flash and hardware. This is a paused image of a running demonstration. And, oops, let me get going here, so you can see I've got my three separated zones going. The first is that block mode running a mixed read/write IO; up on the upper side is the custom FTL mode running 100% read IO (read is blue, write is orange); and the third zone, of course, is running that ZNS protocol with 100% write. Now, it's important to highlight that, because we need to be able to prove to you that the IOs are running independently of each other and not interfering with each other. And to highlight from this layout, you can still see the flash die that have not been selected for use in this demonstration. So let me kick off the IO. And you can see we are completely isolated from zone to zone: the IOs are not interfering with each other, and the tenants are completely isolated by workload, read/write in the first zone, 100% read in the second zone, and 100% write in the third zone. So this is a combination of IO. You can imagine, with the parallelism of flash, each of these zones is running independent of the others.
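To make the layout concrete, here is a minimal sketch of the idea in Python. This is purely an illustrative model, not the actual SEF API: the class, method names, and the particular channel assignments are all made up for demonstration. It shows how carving an 8-channel by 4-die array into virtual devices, with QoS domains layered on top, gives zones that cannot overlap in hardware:

```python
# Hypothetical model of SEF-style isolation (NOT the real SEF API).
# An 8-channel x 4-die flash layout is carved into virtual devices;
# each virtual device owns its die exclusively, and QoS domains layer
# a software protocol (block, custom FTL, ZNS) on top of that hardware.

CHANNELS, DIE_PER_CHANNEL = 8, 4

class VirtualDevice:
    """A hardware-isolated set of flash die."""
    def __init__(self, name, die):
        self.name = name
        self.die = set(die)        # (channel, die) pairs owned exclusively
        self.qos_domains = {}

    def create_qos_domain(self, name, protocol):
        # Software-level isolation: each domain runs its own protocol
        # inside the hardware zone of its parent virtual device.
        self.qos_domains[name] = protocol

all_die = {(ch, d) for ch in range(CHANNELS) for d in range(DIE_PER_CHANNEL)}

# Example carve-up (the real demo layout may differ): 12, 8, and 8 die.
vd0 = VirtualDevice("vd0", [(ch, d) for ch in range(0, 3) for d in range(4)])
vd1 = VirtualDevice("vd1", [(ch, d) for ch in range(3, 5) for d in range(4)])
vd2 = VirtualDevice("vd2", [(ch, d) for ch in range(5, 7) for d in range(4)])

vd0.create_qos_domain("web",       protocol="block")
vd1.create_qos_domain("streaming", protocol="custom-ftl")
vd2.create_qos_domain("database",  protocol="zns")

# Hardware isolation: no die belongs to more than one virtual device.
assert not (vd0.die & vd1.die)
assert not (vd1.die & vd2.die)
assert not (vd0.die & vd2.die)

# As in the demo, some die are deliberately left unused.
unused = all_die - vd0.die - vd1.die - vd2.die
print(len(unused))   # → 4
```

Because the die sets are disjoint, IO issued to one domain physically cannot touch another domain's die, which is exactly the noisy-neighbor guarantee the demonstration is showing.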
The tenants are not being noisy neighbors to each other, and they're all running at the full speed, the full parallelism that flash can bring. Now, this is the normal mode for running a single flash device; in a rack environment you're going to be running hundreds of devices, and they're all going to be set up in individual zones with different software on top of them. What this does demonstrate for you, though, is that those different protocols can be applied and changed as workloads change in your working environment. You can change how that storage is used: go in and change the software driver, and you can change from a typical block mode to a custom FTL mode or to ZNS, depending on the requirements of your workload. Okay, the next portion of our talk will show you how we can manage latency at the device level. Now, latencies come in all sorts of different forms for SSDs. There's a very high performance latency approach that unfortunately has a few outliers that can slow down the average latency. Another approach is: let's not necessarily run the flash at the fastest possible speed, but let's make sure that we have a consistent latency with no outliers. If we look at both of these profiles, we have the same average latency between the two, but there's a different performance profile expected from each. And some people may ask which one is better. Now, my chief architect likes to say this is a trick question, and it's a trick question because the answer is: it depends. It depends on the application that's being used or deployed and how it needs to run its IO. The good news is that the software enabled flash API and technology gives you, the programmer, the ability to control your latency outcomes as your application requires and change those outcomes on the fly. Built into the software enabled flash API are three different queuing modes or protocols. First is a priority queue methodology. Another one is a round robin queuing mode.
And the third one is a very powerful die time weighted fair queuing mode. The die time weighted fair queuing mode gives applications discrete and absolute control over die time scheduling, which allows you, the developer, to manage and deliver latency outcomes all the way down to the individual IO level. This ensures that your IO is prioritized fairly based on your users' needs, and you can avoid IO starvation problems, all under application control. So let's get back to our animation and highlight that first zone, where we put up two QoS domains within a single hardware device. We're going to run two identical read workloads in both of those zones, and then we'll show you how you can adjust on the fly, using software, to control your latency outcomes. For zone one and zone two I have two identical read workloads with identical weights defined, and you can see the latency in the chart. They're both roughly equal; they're slightly offset, but roughly equal. Now, through software, we can go in and change what we expect in terms of weight for each of those zones, and you can see, in real time, the latency profile changes on the IO in process. We can flip it around. We can change it again and continue to give more priority to zone one, the red line. This shows you that you have direct, fine-grained control over latency for every IO and workload going on, and that control is accessible to you in real time through software. This is a very powerful feature of the software defined flash API. All right, the next demo, the last demo that we have to show you, is the discussion of how we can offload the host resources, so the host can concentrate on doing host things, running your applications, without having to worry about how to deal with low level housekeeping functions down at the flash device level. We need to reduce that impact of the data management of the flash, so the host CPU and memory can go off and do the things they need to be doing to run the application.
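The die time weighted fair queuing behavior we just demonstrated can be sketched in a few lines. This is an illustrative toy scheduler, not the actual SEF implementation: each domain accumulates "die time" as its IOs are serviced, the scheduler always picks the domain with the least weighted usage, and re-weighting a domain on the fly changes its share of die time (and therefore its latency):

```python
# Toy die-time weighted fair queuing model (illustrative, not the
# actual SEF scheduler). Each QoS domain accumulates die-busy time;
# the scheduler services the domain with the smallest weighted usage,
# so raising a domain's weight raises its share of die time.

def schedule(weights, io_cost_us=100, total_ios=1000):
    """Return how many IOs each domain gets serviced under the weights."""
    used = {d: 0.0 for d in weights}     # accumulated die time (us)
    served = {d: 0 for d in weights}
    for _ in range(total_ios):
        # Pick the domain with the smallest weighted die-time so far.
        d = min(weights, key=lambda k: used[k] / weights[k])
        used[d] += io_cost_us
        served[d] += 1
    return served

# Equal weights: the two identical read workloads split die time evenly.
even = schedule({"zone1": 1, "zone2": 1})
assert even["zone1"] == even["zone2"]

# Re-weight on the fly: zone1 now gets 3x the die time of zone2, so its
# queue drains faster and its observed latency drops, as in the demo.
skewed = schedule({"zone1": 3, "zone2": 1})
assert skewed["zone1"] == 3 * skewed["zone2"]
```

The key property is that the ratio of service is set purely by the weights, which in the real system are values an application can change through software while IO is in flight.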
In this case we're going to run a garbage collection example. This would also work equally well for things like wear leveling or for an application level, database compaction style operation. The first example, of course, is if we were to do all of the garbage collection activity manually: we would have to go out and identify and read in all of the data that we need to recover, collate that, and write it back out to the flash device, which then allows us to go and free up those blocks for more writing. This, of course, takes host commands, read commands and write commands, as well as use of host DRAM. Now, a feature built into the API is something we call Nameless Copy, which is a copy offload feature. This allows the host to send a single command identifying what data needs to be collected, and the device itself can go read all of that data, collate it, and store it into flash blocks without using any additional host resources or overhead. As you can see from the animation, we've reduced a set of 20 commands down to one command. This reduces the burden on the host CPU, reduces the burden on the host for memory buffering, and reduces the PCIe and NVMe bus bandwidth. And we have a functional demonstration for you here as well, running on actual hardware. So of course we have the manual copy on the left and the Nameless Copy on the right. We're going to run two identical write workloads, 100% write workloads, and the workload is going to be larger than the available storage space in each of these two zones. This will force us into a mode where we have to do garbage collection to finish the workload that's been defined. So let's go ahead and kick off this test. Red, of course, is the write workload; blue is the copy activity.
And you can see there's no copy activity on the Nameless side, because we're not actually moving data in and out of the host; the host on the manual side, of course, is having to read and then rewrite data to free up that space for the next write operation. This test normally takes about three minutes, so we've accelerated it quite significantly. Now the manual copy operation has completed, and you can see that we've done a fair amount of reading and writing to move data. The results: for the manual copy operation to complete this workload, we've issued 800,000 commands and moved about three gigabytes worth of data to do this garbage collection operation. For the Nameless Copy, the copy offload operation, we've issued only 24 commands to do the same identical workload, and those 24 commands used up only 128 KB of data across the bus. And if you look at the CPU utilization chart, that Nameless Copy operation has done so without using nearly as many host CPU cycles. So again, the copy offload capability built into the software enabled flash API gives you tremendous capabilities to reduce host overhead for things like garbage collection, wear leveling, and even application layer, database level compaction. There's actually quite a bit more to software enabled flash than I've been able to show you in this simple set of demonstrations: very powerful capabilities built into the definition to handle almost every aspect of flash management and IO expectations that can come from any developer. What I would like to highlight is item number seven again, and talk a little bit about the abstraction of those flash generations. The ability to change from flash generation to flash generation, whether you're using a TLC media or a QLC media, whether you're using a gen four, gen five, or gen six from the fab, is a very important capability. Abstraction through the API allows us to deliver different types of flash, as well as allowing different vendors to deliver flash to the API.
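Going back to the Nameless Copy demonstration for a moment, the command-count gap it showed is easy to reason about. Here is a small illustrative sketch (the helper functions and block counts are made up for demonstration, not from the SEF API) of why host-driven garbage collection scales with the number of blocks moved, while copy offload does not:

```python
# Illustrative command accounting for garbage collection (hypothetical
# helpers, not the real SEF API). Host-driven GC issues one read and
# one write per valid block moved; a Nameless-Copy-style offload sends
# a single descriptor and lets the device move the data internally.

def host_driven_gc(valid_blocks):
    """Host reads every valid block into DRAM, then writes it back."""
    commands = 0
    for _ in valid_blocks:
        commands += 1   # read the block into host DRAM over PCIe
        commands += 1   # write it back out to a fresh flash block
    return commands

def offloaded_gc(valid_blocks):
    """Host sends one command listing the data; the device does the move."""
    return 1

# Matching the animation: 10 valid blocks in the victim region means
# 20 host commands manually, versus one offloaded command.
victims = list(range(10))
assert host_driven_gc(victims) == 20
assert offloaded_gc(victims) == 1
```

The same linear-versus-constant relationship is what the demo numbers reflect at scale: roughly 800,000 commands and three gigabytes over the bus for the manual path versus 24 commands and about 128 KB for the offloaded path, because the offloaded command count grows with collection passes rather than with data moved.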
So you, the programmer, don't have to worry about those low level differences. That API abstraction can bring a bunch of very good benefits. Software enabled flash technology is a software defined flash approach, and that abstraction can give you a tremendous advantage in your development activities for time to market: bring your next level of flash to market extremely rapidly, without having to deal with a lot of overhead programming activities. Your resource allocation, which is your human resources, can be used very efficiently. Those programmers, those developers, those resources can be focused on development where it matters most, at the application layer, without having to deal with low level flash semantics or changes to programming required to deploy that next generation flash. It also lets you utilize and maximize the flash that you've acquired to its utmost capability. Remember, as your workloads change, you can change the protocols on that flash device simply by going in and changing a software driver. You get a whole new use case for your flash through changing software. I actually fundamentally believe that software enabled flash completely redefines the relationship between the host and its solid state storage. Now, I did mention I would talk a little bit more about the open source aspects of the API. We have available today, out on GitHub, the API document itself, with all of the command sets exposed already, as well as a header file showing how the command sets are going to be implemented. The API document and header file are out and available under the BSD 3-clause open source license. I invite everybody to go download these files, take a look at them, and see how they go. Now, we also know that in order for open source to work it has to be open, it has to be available, and it has to be neutral, and we of course know that we're going to need multiple vendors and multiple sources of flash.
We're currently working with the Linux community to create a project in the open source community for software enabled flash. We need this open and legally neutral forum to bring the technology forward, so the industry, vendors, and even competitors can work together to make this available to you, the user. This community based project will be available, out in open source, in the second half of this year. And all of the sample source code for the libraries and the drivers and the software development kit will be made available through this open source project when it is turned on. So I'd like to thank everybody for attending this webinar session. For additional information on what software enabled flash is, I'd like to highlight that there is a website available with lots of useful information: softwareenabledflash.com. There are white papers, there are tech briefs, and there are a lot more videos on the functionality and the capability, and demonstrations of how the hardware actually works. And on the website there is also a link to the API. Thank you for sticking with me through this presentation and this webinar. We have some time left, so I would like to open up the session for Q&A. And in order to do it well, I would like to invite my lead architect for software enabled flash to join me, Rory Bolt. Rory, if you can unmute and come online and join me, that would be fantastic. Hello Scott; happy to answer any questions that we can in the remaining time. Okay. So we've got an anonymous question here that says: to adopt this API, do we need a device with some special capability, a device that exposes fine level control over device resources? And yes, in fact, software enabled flash is a combination of both the software libraries and APIs that we're defining, as well as a new class of controllers, which are not available yet but are currently under development.
The good news is that the plan is to have software-enabled flash devices available from multiple vendors, so you will have your choice of who you would like to buy this technology from. But you will require a new class of device to enjoy these benefits because, frankly, they allow capabilities that existing SSDs literally cannot deliver.

Thank you, Rory. There's actually another question, from Michael. He's asking about the relationship with RocksDB: is it just a library that uses flash, or is it using the APIs that we've created? With respect to the RocksDB example, we've actually taken a branch off the main RocksDB repository, and we have reworked RocksDB to talk directly and natively to the SEF API, in particular to take advantage of some huge gains that can be made in compaction. Once again, that source code branch is not available yet, but it is planned to be published with the actual SDK itself.

Okay, thank you. Another question has popped up, from Raj. He's asked: how does the abstraction work? How can we abstract different PCIe generations, Gen4 or Gen5, or different NAND generations? How does that actually work, and how does the developer leverage it? Okay, sorry, things are scrolling off the screen here, so I can't see that question. Let me see; I want to make sure I address it correctly. The abstraction of differences such as PCIe Gen4 versus Gen5 works because we will have classes of controllers that implement the specific hardware standards but all comply with the same software API. You write your software to the common API, and then, depending upon the capabilities of the device, it will work with a variety of devices. That applies to both PCIe generations and flash generations.

Okay, let's move on to another question. They're asking: is there a flash emulator?
This is an anonymous question, but: is there a flash emulator that can be used for this class of device? Currently, there is an emulator that we are using in-house as we're developing the hardware, and that's very common, because software usually leads the hardware in terms of delivery. We do not know at this time whether we will be releasing the emulator or not; "commercializing" is the wrong word. Yeah, and I think we will have actual devices within this year, so an emulator may not be necessary. Right. Yeah.

Okay, another question, also anonymous: is this different, or how is this different, from SPDK? SPDK is a wonderful package from Intel, but SPDK is concerned primarily with traditional storage devices. SPDK back-ends to classic HDDs or SSDs, whatever your storage is, and allows you to layer functionality on top of that. But fundamentally you're going to be limited by the capabilities of the back-end store, and so for the types of advanced latency and queuing outcomes that we want to provide, you actually need a new class of hardware.

Great. Let's see: packaging and hardware I think we've answered; there should be hardware available by the end of this year. Is it going to be connected directly via PCIe, or is it something that goes into a box that's accessed over a SAN? The answer is, it depends. At its core it is a PCIe device that can be accessed directly over PCIe, or it could be used as an internal component if you were building some type of external box, or an NVMe-oF type box. For a typical hyperscale application, they tend to use direct-attached storage and would be accessing it directly over the PCIe bus, but there are storage vendors that produce external aggregations of storage, if you will, that are also interested in this technology.

Great. So, there are no more open questions at the moment. I would like to thank everybody that has joined the webinar.
If more questions pop up, we'll go ahead and answer those before we sign off. But as Rory has identified, this is a very interesting and new approach to using flash as flash itself, rather than through the HDD paradigm. And it's available, out in open source, for any storage developer.

I guess we do have a new question: how do you foresee this class of hardware working with NVMe over Fabrics? Actually, I think it's going to work fairly transparently with NVMe-oF. The first version of the API deals with direct attach. However, it's my personal belief, so don't construe this as any sort of product commitment, but it's my personal belief that the logical next step for this is dealing with aggregations of these devices in a networked environment. And of course, NVMe-oF would be the logical choice for that. So, I'll take that as a very leading question, and I gave you a very leading answer, albeit a personal opinion.

Thank you, Rory. You know, I think we always envisioned the software-enabled flash approach, both the hardware and the software, to be a building block for delivering larger and better storage devices. So as a building block we can deliver a traditional SSD, or something completely custom, or, as you've identified, an NVMe over Fabrics approach as well.

Here's a question from Jan asking if it can implement a key-value store. I can go ahead and answer this one. Of course; it's all driven by software. A software driver can be put in place that would allow a key-value store to be deployed as a protocol. If you think about it, that key-value store could benefit tremendously from those host offload functions: imagine a key move function being done wholly within the device using that copy offload feature. Rory, do you want to expand on that? I think that's a good start. Key-value stores are actually something that we have considered from inception, and there are a number of possibilities.
I know that there are some internal skunkworks projects within Kioxia to develop a key-value store that may in fact end up as an example in the SDK itself.

Great. I think with that we're running low on time, and we are out of questions at this point as well. So I'd like to pass it back to Christine at the Linux Foundation for any closing housekeeping that we need to do. Once again, thank you everybody for joining us. Please feel free to reach out to us with any questions that you may have; we'd love to pick up questions, and you can do that either through the GitHub site or through the contacts available to you on the softwareenabledflash.com website. Once again, thank you very much. This is Scott Stetzer signing off.

Thank you so much to Scott for his time today, and thank you to all the participants who joined us. As a reminder, this recording will be on the Linux Foundation YouTube page later today, and we hope that you're able to join us for future webinars. Have a wonderful day. Thank you, everybody.