Hello, everyone. My name is Yaowei Bai. My main focus is on virtualization, distributed storage, and the Linux kernel. My topic is bringing SCSI support into the QEMU block layer. This talk consists of several parts. First, I'll introduce some background, then our solution to the problems and how to implement the solution. Finally, we'll see our current work and the future plan.

Okay, let's see the background. Some cluster services, like OCFS2 and MSCS, need shared-disk concurrency control mechanisms. These concurrency control mechanisms are usually implemented via block-layer protocols like SCSI, and the shared disks are supplied by distributed block storage such as Ceph. Here is the architecture. We can see that the cluster services run in the QEMU guests and access the Ceph cluster through the iSCSI protocol. There are an iSCSI initiator and an iSCSI target gateway in the middle of the I/O path. This architecture has some problems. The first one is the long I/O path: there are several extra components in the path, and these components are hard to maintain.

Okay, let's see our solution. We know that QEMU can access the Ceph cluster directly, so the solution is simple: we just drop the iSCSI part and turn the architecture into the one in the right picture. That is, let QEMU access the Ceph cluster directly. But there is some work to do. The first item is SCSI support in Ceph. This work has been done; it consists of compare-and-write and persistent reservation support. The compare-and-write part has been upstreamed, while the PR part is still private. The next item is SCSI support in QEMU. This part is still missing, so our work is to implement it in QEMU.

Let's see how to implement it. This work consists of four parts: SCSI device emulation, the block layer interface, the block I/O path interface, and the block driver interface. We will see them one by one.

Okay, the first one is SCSI device emulation.
The code is in scsi-disk, and the work is just to add the emulation of two commands: compare and write, and persistent reservations. This part is quite simple.

The next one is the block layer interface. This part is in the block backend. Here we reuse the block backend write path for compare and write, and add a new persistent-reserve-in/out API for the PR part.

The next one is the block I/O path interface. This one is quite similar to the upper one: we reuse the block driver write path for compare and write, and add new block driver persistent-reserve-in/out APIs for PR. The code is in io.c.

The last one is the block driver interface. We add new driver callbacks: one for compare and write, and persistent-reserve-in/out for PR. They are implemented in block/rbd.

Okay, now we can see the status of this work. Currently it is online in our cloud services and it is functioning normally. The patch set of the compare-and-write part has been sent out; you can find it through this link.

Let's see the plan of this work. The plan is upstreaming: in the Ceph community we will upstream the persistent reservation support, and in the QEMU community we will upstream WRITE SAME support (WRITE SAME has already been supported in the Ceph community) and the persistent reservation support. That's all. Thanks.