So it's about dinner time, and many of the audience have left. That's okay; we'll just follow the agenda. I'm Zeng Shufang. My colleagues and I are going to present this project to you: I will introduce what we have done, and Wang Qing, from Intel, who is also the person responsible for the Greater China region, will talk about the state of the project afterwards.

Let's start with what SODA is. To me, it is a data management platform; everything else, the smart features, the autonomy and so on, is additive. Why do we need such a data management platform? Because in companies' current IT facilities there are many different kinds of infrastructure, like containers, virtual machines, edge computing and so on. Below them there is storage provided by different manufacturers, and there is software from multiple producers in between. We want to connect the north and the south, and in the middle there are many data management platforms or tools, for example for migration, for monitoring, and for resource distribution. We want to coordinate these into one data management platform so that a user can use them easily. That is one of our motivations.

Now the architecture. At the top there are plug-ins for interaction with Kubernetes and other traditional applications. In the middle, the orange parts are mainly for traditional data management, for example volume or file distribution: we can mount them, attach them, and so on. The purple parts are for intelligent monitoring: we monitor information from the devices and, based on that information, we can make intelligent predictions; this part is newly added. The red parts are basic modules providing basic functions for the other modules.
The green parts connect different storage devices, and the blue parts connect different cloud platforms. On the cloud side we have object storage management; there are the controller project and the northbound plug-in projects, with the multi-cloud project in between.

That was the architecture; now I have some functions to share with you. The first is data provisioning: we support volume creation, mapping, attaching and so on. On the right there is file sharing and its creation; as you can see, we support that as well, through the mappings and the drivers.

On top of the basic functions, we have orchestration. How does orchestration work? Take an example: suppose we want to mount a volume. To do that, we need to create the volume, then query it to get its information, and then mount it. We have all the building blocks to achieve this. It is possible to do it all manually, but that is inconvenient. With orchestration, these basic operations can be arranged into a service, and whenever we run into the same scenario again we simply run that service instead of doing each step by hand. At the back end this plugs into different services, so we can use different drivers. If you have used similar tools, you know that a workflow is user-defined as a series of actions; after deployment, when you run the service, the backend runs that series of actions. On top of these data services, we have added some intelligent capabilities.
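The orchestration idea just described can be sketched in a few lines: the manual steps (create a volume, fetch its details, mount it) are recorded once as a named service and replayed whenever the same scenario recurs. This is a minimal, hypothetical sketch; the names and steps are illustrative and are not the real OpenSDS/SODA API.

```python
# Illustrative orchestration sketch: a workflow is a named, ordered series
# of actions that the backend executes as one service. Drivers are stubbed.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Workflow:
    """An ordered series of actions, run as a single service."""
    name: str
    actions: List[Callable[[Dict], Dict]] = field(default_factory=list)

    def run(self, context: Dict) -> Dict:
        # Each action receives the accumulated context and enriches it, so
        # later steps (mount) can use the results of earlier ones (create).
        for action in self.actions:
            context = action(context)
        return context


def create_volume(ctx):
    ctx["volume_id"] = f"vol-{ctx['size_gb']}g"   # pretend driver call
    return ctx

def get_volume_info(ctx):
    ctx["device"] = "/dev/sdb"                    # pretend lookup
    return ctx

def mount_volume(ctx):
    ctx["mounted_at"] = ctx["mount_point"]        # pretend mount
    return ctx


provision = Workflow("provision-and-mount",
                     [create_volume, get_volume_info, mount_volume])
result = provision.run({"size_gb": 10, "mount_point": "/data"})
print(result["volume_id"], result["mounted_at"])  # vol-10g /data
```

Because the workflow only holds a list of callables, swapping in a different storage driver just means registering different action functions, which mirrors the plug-in design described in the talk.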
This is the data operation of the purple parts, intelligent monitoring. There are connectors to different platforms, and we can get information from them: we can collect metrics from different kinds of jobs and send them to different sinks. For example, we can combine them with Kibana to present the information visually, or feed the metrics into a basic machine learning module to study them; after getting the information from Kafka, it can make predictions. At the bottom right there is an example based on IOPS, where you can get a prediction. In addition to Prometheus and Kafka, other kinds of platforms can be used, but a corresponding converter and connector are needed. After connecting, there is one more thing: the API. Since this is monitoring data, the API provides the interface to raise alerts before incidents happen.

Then, data migration. This is a very simple diagram to briefly show what it is. The big picture is that we migrate objects between different clouds. In such a scenario, the basic requirement is a unified interface so that users can connect to different clouds; there are many of them, like Huawei's object service and others. After connecting them, users get a single API, so they don't have to worry about the differences underneath and just go through the unified API. Then we have migration based on policy and on the lifecycle. Policy-based migration is comparatively simple: users define their policies and decide where the data should go, SODA schedules and creates such a task, and the data is migrated to the designated buckets. The other kind is more complex.
It involves a different scenario, where the lifecycle spans clouds. For an object stored in the cloud, after the data is generated, its value degrades over time, so we need to migrate it to another tier, and after the data expires we need to delete it. That is a typical lifecycle. Each public cloud has its own lifecycle management: you define the lifecycle, when to migrate the data and when to delete it, and the cloud does the migration for you. But each public cloud has a different interface, so if you use multiple clouds you have to adapt to each of the interfaces, which is complex. Nowadays we also have resilience and security requirements, to guard against unexpected failures, so we do not use only one cloud; we use multiple clouds. That raises two issues. The first is adapting to each cloud's interface. The second is the migration itself: maybe after thirty days I move data from standard storage to another class, and after ninety days I delete it; the native lifecycle can only act within one cloud, it cannot act across clouds. For these two issues we provide one solution: the user defines the lifecycle rules through the SODA interface, and no matter which cloud is used, once the rules are set, the scheduling is done by SODA without the user worrying about the backend. It also supports the cross-cloud case: you can store data on S3, after a while migrate it to a cheaper class, then archive it, then delete it. For example, you store it on AWS, and when a reduced price applies after some number of days, you can set a rule that the data should be moved and later deleted, or choose not to. That is the lifecycle. It is also worth noting that besides the clouds' default definitions of storage tiers, we have our own default tier definitions, and we support the user in defining a storage tier
corresponding to the tier of each public cloud; once defined, we do the data management for you. That is data migration. I have only selected some of the features to show you; it actually has many more. If you are interested, you can go to the links shown here for more details: search for OpenSDS, you can even find it on my T-shirt. We used SODA for this presentation; the name may change in the future. There is some contact information as well; if you are interested, go to the website, search for OpenSDS, and you will find the relevant information. Now I give the floor to Mr. Wang Qing to introduce the community.

My name is Wang Qing, I'm from Intel, and my work is mainly about networking and storage. I don't know how many of you have a background in storage or networking. At the beginning we had the concept of SDN: if you buy switches from different vendors for networking, the number of different interfaces makes interconnectivity hard, and that's why software-defined networking emerged. I think this project is similar. Data is flowing, and in the future we will see more big data, and now edge computing as well. It is not driven by a single vendor; it involves the different cloud providers and vendors, so integration and communication are very important. Different countries have different cloud providers and operators. For cloud infrastructure services, in the past you selected a storage model, perhaps proprietary, paid storage, but now you can use open source storage. So there is the issue of legacy storage: how can you migrate the data from the old storage system to the new system? OpenSDS can provide the platform for users, with a unified interface no matter which storage or which public cloud is used; it is transparent. This is a massive project requiring the
support from different vendors. You may think it's quite similar to OpenStack, but my personal understanding is that OpenStack and Kubernetes are not the only two forms of cloud, not the only two cloud solutions. Different companies, AWS for example, have their own cloud solutions, and the relationships are many-to-many. If you use different models or different cloud solutions, then data storage, management and processing run into the problem of connectivity. So for the OpenSDS project we want a more universal scope, which is named the SODA program. The first slide showed the definition of the model and a simple data autonomy. We hope we can provide a unified, harmonized storage platform and data management platform. In open source storage there are different solutions and concepts, and also some clients: even at this conference, colleagues from JD.com introduced their container-oriented solution, and China Unicom also has an open source solution. Different companies have their own solutions, but under the Linux Foundation, while networking has an open source umbrella community, there is no such unified platform for storage. That's why we want to establish this program: to be a unified platform, integrating different storage systems in an interconnected way to meet the data flow demand. For that integration it is very important to have such a platform, to prevent vendor lock-in. That is the mission statement of this program.

For the project goals, we have three in total. The project is still in the process of brewing and has not fully taken shape; these are the three major objectives. Actually there are more than three, but these are the basic ones. One is to be open: that is the essence of an open source software community, so the program is open to all users and we should prevent vendor lock-in;
the code and the program are open and visible. Also, we hope it is real: for projects we usually have brainstorming and assumptions, but for this program we hope it can resolve real problems and real needs, for example cross-cloud management, edge computing issues, and data management issues. Another objective is that it is ready for use: ready means the storage system of a client can be a hybrid solution from different hardware and software vendors. I really like making the comparison with networking. The network vendors used to talk about the concept of decoupling: hardware, software and applications should be decoupled and can come from different vendors; the hardware from one vendor, the software platform from, say, Red Hat, and the network functions from yet other vendors for different telecoms. That is three-layer decoupling, and storage should have the same decoupling model: in the future the storage hardware can be one vendor's system and the software can be an open source storage system, so that different systems can be integrated and made compatible to provide users with a good solution. Sometimes there is a legacy issue, sometimes it is a matter of budget, but the ultimate goal is to integrate all these systems in a compatible way.

For the SODA program, what are the deliverables? One deliverable is the platform itself: at the control level it is OpenSDS, and the plug-ins from different vendors should be integrated into the platform. Another deliverable is a set of standards or criteria: because different vendors take part in the community, we need a set of standards, and then a certification to see whether you are qualified; if you meet the standard and get certified, then when a third party or a new entrant joins, there is no issue in terms of communication and data management. So these
are the deliverables. We also want to build a community with the Linux Foundation in which developers, operators, users and vendors can participate. So these are the four deliverables we hope to bring to users from the SODA project.

So far, among the major participants in the ecosystem we already have Intel, IBM, Fujitsu and Huawei; they are the key vendors in the SODA ecosystem. WD and HPE are also vendors in the ecosystem; you can see the full list of participants in the SODA program. The second principal objective is to really resolve the pain points of customers, so we actively connect with end users to see what they need. For example, we are in contact with KBM from Europe and with Yahoo Japan; representatives of Yahoo Japan are also here today. In China, we have a connection with China Unicom, who will feed their demands back into this community; China Unicom has also signed the agreement to become an end-user member. We are also in contact with ICBC and other major banks, where negotiation is in progress, and there are two big operators, China Telecom and China Mobile, that we are negotiating with. We have also found a cloud service provider, JD.com, who are very interested in this project; they are also sharing their technology with us, hoping that SODA can connect to their CFS system.

About the benefits of membership: for example, you get financial support, and you get suggestions and guidance on how to drive a project; there is also some tailored development for projects, and those projects will be turned into open source. Take China Unicom: they have some projects they want to make open source, but they lack the needed resources, and they want users to use their system. A platform targeted at storage will help them promote their technology and products, because many users of these products are inside the community. We have a
fabric technology, NVMe over Fabrics, that Intel wants to promote; that's why we actively participate in the community. We found that Yahoo Japan, a CSP from Japan, is interested in this, so probably we can work together in the future to see how NVMe over Fabrics might help them. So, by joining this community you will be able to promote your own projects; new members or third parties get opportunities to promote their own products. Second, we will give recognition to your company, telling everyone that you are an active user in the community, because members want to grow their influence in the community, especially the storage community. Third, we will also help members shape strategies, such as how to operate and run the whole project. Fourth, there are joint POCs, which are very important: we have new technologies and new open source projects, and how do we get them in touch with users? Another example is CFS: they also want to be used by more users, so they want to use the platform to promote their product. Fifth, you will be able to grow with the community. Even though the community looks small now, if more storage providers join it, we will see a more prosperous future. Starting from 2016 the community has been growing, and now, in 2019, we are releasing the third version and welcoming users. We believe that projects belonging to particular companies can grow together with the community.

By joining SODA, there are some expectations for members and users: that they participate in a technical project, advocate for the project, and contribute to the upstream projects. It is not limited to the SDS community; there are also the EEC and other projects. From the perspective of users, we want to see feedback: whether there are new challenges, whether we need to improve; and when we fix the problems, we hope that these
can reach end users like Toyota or Yahoo Japan so we can get more feedback. That is the expectation for members and users.

About the governance model: I want to talk about the purple parts first, as they are the new parts. OC is the outreach committee. In the China community they selected me as the contact point, so I will get in touch with the large operators, banks and all kinds of CSPs to promote the project. We now also have outreach communities for North America, Europe, South America, etc., and for Japan, with many members. The yellow parts are the end users: we have an end-user community for large projects and get feedback from them, so that the challenges and so on can be raised inside the community and the contributors can help solve the problems. The green part is the TSC, the technical steering committee. Intel has members in the TSC, and if I remember correctly, IBM, Huawei and Fujitsu all have representatives in the TSC joining the discussions. For specific projects, like Sushi or CFS, all kinds of different projects, we have the PTL, the project team leader responsible for the project. This is the governance model we proposed and the goal we hope to achieve.

The Chinese community was established half a year ago, and we have set up regular meetings: every two weeks we have a meeting starting at 3 o'clock, and the TSC also meets every week. Although the community is only half a year old, we already have many members joining the discussions. We also organize meetups; last October we held them in Shenzhen and Shanghai so that we can hear from people directly. We will take advantage of the Chinese community to organize different activities, for different projects and different parties, where SODA can be put under further validation and contributions can be made in the form of membership fees and engineering resources. There will be two levels of membership. End users, government entities or
non-profit organizations can join for free as associate members; they might not contribute code, but they can offer other contributions such as feedback and POCs. So, briefly, that's the end of my speech. I will leave this slide up: if you are interested in our community or in SODA, or if you have any product related to storage, you can contact us via the information here; perhaps we can do things together and make some achievements. Thank you.