Thank you very much for joining us so late in this session. It is an honor to be here with you, the technical experts. This talk is very application oriented: I am here to show what we have done in security, specifically in secure multi-party computing. I will take 15 minutes, so I will speed up.

About secure computing: this is a fairly new technique, and the main pain point it addresses is the data silo. This is a widespread problem, because many data sources are controlled by different institutions. If the data could be integrated, consolidated computation could play a big role, but everyone in that process needs data protection, whether personal privacy or business confidentiality. Under this framework, we started looking for innovative ways to use the data while keeping it invisible.

As my colleague discussed earlier, secure computing has a technology roadmap covering two paths. One is based on a TEE: hardware, with software built on top. The other is cryptographic algorithms. The path we take is the hardware-based one, the trusted secure enclave, with software on top. Put simply, the hardware provides an isolated space with several guarantees. First, models can only be installed into it after signature verification, so data owners can be confident about what will run. Second, the data is encrypted with the enclave's public key; after encryption, the computation runs inside and only the results come out. So from a technology perspective, the owner of the machine cannot see the original data. This is the mechanism that lets us use the data while keeping it invisible.

If we used the original TEE environment for development directly, there would be many difficulties. For example, you need the EDL (Enclave Definition Language) to rewrite the model. Our mainstream target users, including banks, cannot do that kind of development; the cost is very high.
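The enclave data flow described above (publish the enclave's public key, encrypt data against it, compute inside, release only the result) can be sketched as a toy simulation. Everything here is illustrative: the Diffie-Hellman key exchange with a small prime and the SHA-256 keystream are insecure stand-ins for real attestation and ciphers, and the "model" is just a sum.

```python
import hashlib
import secrets

# Toy Diffie-Hellman parameters. A real deployment would use SGX remote
# attestation and standard key sizes; this small prime is only for readability.
P = 4294967291  # prime (2**32 - 5)
G = 5

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from key (stand-in for a real cipher)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, key: bytes) -> bytes:
    """Encrypt/decrypt by XOR with the keystream (same call both ways)."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

class Enclave:
    """Stand-in for the SGX enclave: keeps its private key inside, so the
    host machine never sees the plaintext data, only the final result."""
    def __init__(self):
        self._priv = secrets.randbelow(P - 2) + 2
        self.pub = pow(G, self._priv, P)  # shared after (simulated) attestation

    def run(self, owner_pub: int, ciphertext: bytes) -> int:
        shared = pow(owner_pub, self._priv, P).to_bytes(8, "big")
        plaintext = xor(ciphertext, shared)   # decryption happens only inside
        return sum(plaintext)                 # toy "model"; only this leaves

# Data-owner side: derive the shared key from the enclave's public key,
# encrypt locally, and send only ciphertext to the enclave host.
enclave = Enclave()
owner_priv = secrets.randbelow(P - 2) + 2
owner_pub = pow(G, owner_priv, P)
shared = pow(enclave.pub, owner_priv, P).to_bytes(8, "big")
data = bytes([10, 20, 30])
ct = xor(data, shared)

print(enclave.run(owner_pub, ct))  # -> 60; the host never sees `data`
```

The point of the sketch is the trust boundary: the machine owner handles only `ct` and the returned sum, never the plaintext.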
What we did instead was hide that cost behind high-level SDKs that can run Python machine learning models directly in this environment. An alternative approach realizes the same goal mainly through cryptographic algorithms; it has gained momentum recently, and Google launched its federated learning framework in 2017 targeting the same problem. We selected the TEE-based method mainly because it has higher performance and supports more types of models, whereas encrypted computation today supports mainly linear and polynomial models. On the basis of the TEE, an XGBoost model we run keeps latency within 100 milliseconds, which is usable in many scenarios. That said, if your scenario is very simple, MPC has advantages: it needs no special hardware or TEE environment, so there is no black-box component at all, and to some degree it is very transparent.

In this technology stack we have also integrated a blockchain, for two reasons. One is coordination among the different parties: which parties computed what, so credentials and tracing are very important. We also need a ledger. We made it a two-layered structure, and clients can choose to use one module or both; we think the two will be connected in most deployments. We made the ledger layer pluggable to support different underlying chains. In our current deployments our early clients are mainly financial institutions, so we integrated with Fabric.

In terms of the evolution and acceptance of the technology, I do not think adoption in China has been very fast. If you know TEEs, you know most scenarios so far have been key management and data protection, so this use is quite new. Around 2016-2017, many startups, along with large companies, started working in this direction.
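The credential-and-tracing ledger mentioned above can be sketched as a minimal append-only hash chain, a simplified stand-in for the Fabric ledger; the record fields (task id, parties, result digest) are invented for illustration.

```python
import hashlib
import json

class AuditLedger:
    """Append-only hash chain recording which parties took part in which
    computation -- a toy stand-in for the blockchain layer."""
    def __init__(self):
        self.blocks = []

    def record(self, task_id: str, parties: list, result_digest: str) -> dict:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = {
            "task_id": task_id,
            "parties": sorted(parties),
            "result_digest": result_digest,
            "prev_hash": prev_hash,
        }
        # Hash is computed over the body before the "hash" field is added.
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.blocks.append(body)
        return body

    def verify(self) -> bool:
        """Recompute every link; any tampered record breaks the chain."""
        prev = "0" * 64
        for b in self.blocks:
            expect = {k: v for k, v in b.items() if k != "hash"}
            if b["prev_hash"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(expect, sort_keys=True).encode()).hexdigest()
            if digest != b["hash"]:
                return False
            prev = b["hash"]
        return True

ledger = AuditLedger()
ledger.record("task-001", ["bank_a", "tax_bureau"], "ab12")
ledger.record("task-002", ["bank_a", "logistics_x"], "cd34")
print(ledger.verify())  # -> True
```

A pluggable design like the one described would swap this toy chain for Fabric or another underlying chain behind the same record/verify interface.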
In March this year, the PBOC called for secure computing to be taken up as a very important direction, putting it at the same level as big data and AI. This technology can solve real needs for banks, because bank data needs to be integrated with data from other sources for risk control.

To be more specific about the framework: in addition to the TEE SDK and the blockchain itself, other modules also sit on the framework. In the transmission process we need a channel for moving data. In this framework the data normally stays stored locally with its owner, so the framework needs a registry service and a search engine, and the system needs to be deployed in the data operation console, integrated with processing, cleaning, and business systems. Then the technology can be used like any ordinary technology. We are also communicating with Hyperledger.

Let me introduce the early adoptions. Driven by regulation, the early-stage users are risk-control teams in the banking system. We have had many projects this year, including credit evaluation of small and medium-sized enterprises, drawing on government data sources including taxation, the banks' own proprietary data, logistics data, and so on. By modeling holistically across the different parties, deployed in the secure computing environment, we can complete evaluations for SMEs that are otherwise difficult to assess.

Beyond this scenario there is customer mining, a marketing scenario in the banking system. Banks have their own data, with millions of seemingly similar customers. How do you run an integrated algorithm, for example computing jointly with airline data, and target promotions at the right customers? Silicon Valley also has such projects for promotions and marketing. Potential application directions also cover advertising engines and ad placement.
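A joint-modeling step like the SME evaluation above could be sketched roughly as follows: each party contributes features keyed by a blinded customer ID, and the join and scoring happen only inside the trusted environment. The party names, feature names, and scoring rule are all invented for illustration, and a salted hash is not a substitute for real private matching.

```python
import hashlib

def blind(customer_id: str, salt: str = "demo-salt") -> str:
    """Blind the raw ID before it leaves each party (illustrative only;
    a real system would use keyed matching inside the TEE)."""
    return hashlib.sha256((salt + customer_id).encode()).hexdigest()

# Each party prepares {blinded_id: features} locally, then sends it
# encrypted into the enclave.
bank_features = {blind("sme-001"): {"loan_overdue": 0},
                 blind("sme-002"): {"loan_overdue": 2}}
tax_features = {blind("sme-001"): {"tax_paid_cny": 180_000},
                blind("sme-002"): {"tax_paid_cny": 15_000}}

def score_inside_tee(bank: dict, tax: dict) -> dict:
    """Join on blinded IDs and apply a toy scoring rule; only the scores
    leave the enclave, never the joined feature rows."""
    scores = {}
    for bid in bank.keys() & tax.keys():
        row = {**bank[bid], **tax[bid]}
        score = 600 + row["tax_paid_cny"] // 10_000 - 50 * row["loan_overdue"]
        scores[bid] = score
    return scores

scores = score_inside_tee(bank_features, tax_features)
print(scores[blind("sme-001")])  # -> 618
```

Neither the bank nor the tax bureau ever sees the other's raw rows; each sees only the final score for the customers it already knows.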
Take ad-purchase data: if it could be used this way to train models, placement accuracy could improve a lot, and the advertisers would not need to expose their actual results. In China we discovered these needs first in the financial system, but I believe most of you here work in the Internet sector; with the rising price of traffic, we may need to apply this more there in the future.

Insurance companies are interested in this application as well, because it could help them prevent insurance fraud. Sometimes insurers do not vet a policy very carefully, and in China such frauds happen frequently. Since January, it was discovered that one person had bought life insurance worth more than 100 million at more than 20 insurance companies, and the insurers only found out afterwards, through their WeChat group chat. With this technology they would not need to disclose any user information, yet they could predict the risk in advance: they could check whether a new applicant showed any sign of insurance fraud within the previous 30 days, and use that to stop this kind of fraud early. This is what we are going to do more of with insurance companies.

We also have invitations from some AI companies, especially those serving the financial system. They deploy their models at the client's site, but they do not want to hand the models over to their users. As I mentioned, the TEE environment, which protects data, can also host the model: the model is installed into the client's TEE, which outputs the AUC value and other indicators, but the model itself is never handed to the customer. So you can protect the code and protect the model.

And because it is a multi-party computation system, data is like fuel in it, so more and more data sources are joining, starting with the government.
And from third parties too, who are very interested in this new type of data collaboration. So now, online, we have data entrusted to us from more than 20 companies, financial and non-financial, with different data assets, and they want to enlarge this network to make its results better, to scale it up. We also welcome companies who are interested in collaborating with us. I hope my sharing has offered you a new perspective, because our team thinks secure computing will be a very interesting new direction. You are welcome to join us, or to talk about possible opportunities to collaborate.

I have several questions. You mentioned you have a large volume of data entrusted on the network. What do you call it?

We call it a data network built on our SDK. These data sources grant permission in real time, and their data can be jointly computed with other data sources and clients. But for now it is a centralized deployment: all computing is done in SGX.

And where is the SGX cluster deployed?

It depends on the scenario. Technologically, wherever the SGX cluster is deployed, the data cannot be seen. Right now the SGX cluster is deployed at the bank's end, but it will be deployed to other third-party financial institutions as well.

So it is not like a connected data market; it is still a case-by-case solution?

Yes. And we also inform the clients which data sources are accessible. But it is not yet automated for direct computing.

And for the insurance scenario, is it already implemented?

We have some POCs together with an anti-fraud alliance to promote this to its member companies.

You mentioned your SGX cluster working with the bank is deployed at the bank, because the bank's data cannot leave. Do the banks accept that their data could leave?

These projects are in a pilot period, so for the first phase it is still deployed at the bank.

Is that the bank's own position, or does it depend on the regulator?
Actually it is a two-part question: for the financial institution itself, and for when it works together with the government. If the data is encrypted with a key and enters the other party's TEE, it is acceptable, both on the bank side and on the regulator side. The MIIT is now drafting relevant standards, and the PBOC also recognizes this direction. For that to trickle down from the regulator to the institutions takes six to twelve months.

SGX is an Intel technology. Will that bring any problems in financial scenarios?

SGX is, but TEE is not only Intel technology. ARM also has it, and NVIDIA and AMD have comparable technology; some early TEE research was even sponsored by Huawei. But SGX is the only one that is mature and available today; the others each have their own problems.

Yes, that is an issue. But do you think the banks will have to replace all their machines with Intel chips? What if in the future they switch entirely to ARM servers, for example Huawei's?

That would require some time to test.

Two more technical questions. You said your SGX environment can run Python. Do you use the Rust SDK, integrated with the Fabric functions, to run it? Because the original SGX SDK is C++.

Maybe you can discuss that with our architect, but we are not using Rust: Rust is too low-level a language, and it requires high development effort.

But if you want to run Python, you need to move the whole Python runtime into SGX. That is quite technically demanding.

Yes, and we have tried hard to find the right technical experts. Maybe later you can talk with our SGX architect.

You also mentioned XGBoost. Is it standalone?

Yes. We have not run a clustered version.

Another question: you have "decentralized computing" in your architecture. What does it mean?

It means distributed ledger keeping; that may not be a very good translation. In English, it is a blockchain.

Okay, that's all for me.
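As an aside, the insurance check discussed earlier, counting how many policies an applicant took out across insurers within the previous 30 days without revealing raw identities, could be sketched like this. The insurer names, dates, and the bare-hash blinding are all invented for illustration; a real system would do the matching inside the TEE with proper keyed hashing.

```python
import hashlib
from datetime import date, timedelta

def blind(national_id: str) -> str:
    """Blind the applicant ID before sharing (illustrative only)."""
    return hashlib.sha256(national_id.encode()).hexdigest()

# Each insurer reports (blinded_id, policy_date) pairs into the enclave.
insurer_a = [(blind("id-42"), date(2019, 5, 20))]
insurer_b = [(blind("id-42"), date(2019, 5, 28)),
             (blind("id-77"), date(2019, 1, 3))]
insurer_c = [(blind("id-42"), date(2019, 6, 1))]

def recent_policy_count(records, applicant_blinded, today, window_days=30):
    """Runs inside the TEE: count the applicant's policies across all
    insurers in the previous `window_days`; only the count leaves."""
    cutoff = today - timedelta(days=window_days)
    return sum(1 for blinded, d in records
               if blinded == applicant_blinded and cutoff <= d <= today)

all_records = insurer_a + insurer_b + insurer_c
n = recent_policy_count(all_records, blind("id-42"), today=date(2019, 6, 10))
print(n)  # -> 3 policies in the window: flag the application for review
```

No insurer learns which competitors the applicant visited; each receives only a count (or a flag) for applicants it queries.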
I think you are using Fabric to make a trusted, safe environment. Is that with hardware, or what?

You mean Fabric itself?

In your architecture there is a multi-party trusted environment on the left.

We have hardware and software. The hardware is SGX.

What is the full name?

Intel SGX, Software Guard Extensions. As I discussed with this colleague, we have experience with other technologies, but SGX is the most mature now. There are also software-only encryption approaches, but to answer your question, ours is based on hardware.

What is the difference from a KMS?

A KMS is mainly for keys and passcodes. This is a TEE: it performs computation. For example, the execution of a model is done inside SGX. In the beginning, TEE and SGX were mostly about fingerprints, passwords, and so on.

In the scenario of helping AI companies protect their models, it seems there is only the trusted environment, no blockchain. And for one bank client, there is no blockchain, just the TEE?

You mean pure protection of a model?

Yes, it doesn't use Fabric or blockchain.

For model protection alone, it doesn't require blockchain. But if the model you are protecting takes variables from multiple data sources, it may need Fabric to account for each party's contribution. Strictly speaking, what protects the model is the TEE, not the blockchain.

But with multiple data sources, do the sources also need to be "available but invisible" in that scenario?

Who do you mean?

The model company or the bank. The data sources: your user is a bank, and if you collect data from other places for AI analytics, what do you need the data sources to be?

These are not contradictory. If you run the models in the TEE for computation, then the data source itself is available but invisible, and the model is as well. But it also depends on the model's specific requirements; more and more data sources do not want to disclose their information explicitly.

What Fabric version are you on now?
Now it is 1.5.

It should be 1.4.

1.5. But in our scenario it doesn't need a very advanced Fabric version.

You mean for a Fabric performance test, or what?

No, in those scenarios like banking, insurance, and so on.

Let me explain. On performance, for the blockchain part: in our current scenarios the TPS is around 300, which will not hit the blockchain's bottleneck, and that is why we chose this direction for now. The second part, the secure computation, is a more complex issue. Right now we are mainly identifying which model types it supports, and measuring things like XGBoost latency. We are going to run more comprehensive tests to see where its limits are and how many models it can run; maybe by August we will have more comprehensive results. For now it is mainly driven by whatever model the customer wants to deploy, case by case.

When you have a case, do you deploy one Fabric per case, or run all the cases on the same Fabric network?

I think now it is more like a local network per case: if you want secure computation, you have your own framework, you have a chaincode, and then it is done.

Yes. In the future we want it to be shared; now we are trying to build that.

Okay, thank you.

Thank you.