Okay, nice to meet you all this afternoon. I come from Cyberport — actually in the same team as Bootslog; we are from the cloud team in Cyberport, working in the Technology Centre. The role of the Technology Centre is to promote new technologies to the ICT industry, and also to drive cloud adoption among the Cyberport tenants, incubatees, and ICT startups. Cyberport is 100 percent funded by the government to promote new technology, so we have a programme for our incubatees and entrepreneurs to set up their businesses, and we can provide funding for them. Cloud computing is one of our focuses.

This is, you could say, our roadmap of what we have done on OpenStack deployment, past and present. Starting from around June 2011, we began working with OpenStack, starting from the Cactus version and later Diablo. In September 2012 we upgraded to Essex, in February this year we upgraded to Folsom, and in November we have just launched our community cloud, which already runs on the Grizzly version. So you can see that we are quite advanced, or at least keep pace with the OpenStack releases.

On our cloud platform — actually, before the launch of our community cloud, we already provided a trial platform for our users. As I said, the users include our tenants, incubatees and some ICT startups within Cyberport. We mainly provide two kinds of services: one is IaaS and the other is SaaS. For the IaaS, I can introduce more on that later; currently we mainly provide infrastructure — network, storage, CPU resources and memory — to our startups. We currently have about 30 projects and over 80 virtual machines running on the platform, and we are planning to migrate gradually from the trial platform to our new community cloud later this year.
And this is our cloud portal, where we have introduced our community cloud. I may use a browser to show these pages so you can see them clearly. Thanks, Peter. This is our cloud portal, which was just launched this Monday. If you're interested, you can go to the website — you can see the address shown here. It shows all the selling points of our cloud, and this cloud is already built with OpenStack.

And this is the portal of our community cloud, built on Grizzly. Once you log in, you can see our dashboard, which we currently provide to our users. At the current stage we provide some images for them to create their virtual machines: right now we offer Ubuntu, CentOS, and Windows Server 2008 and 2012, and we are trying to add more server images to the platform so users can get more out of it.

Apart from the OpenStack dashboard, we have made some investment in RightScale, which is a multi-cloud management tool. From our dashboard there is an icon you can click to reach the multi-cloud management; it links through to the RightScale website. RightScale is a rather powerful tool, so users can browse across different clouds. In our dashboard, for example, we have already added Amazon and Rackspace, plus our Cyberport trial cloud platform, and the latest community cloud is already on RightScale too. What can I do on RightScale? I can do everything I can do in the OpenStack dashboard, and beyond that, RightScale provides more features for monitoring and even auto-scaling — which is being developed in OpenStack under the Heat project, but is already available in RightScale.
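The Heat-based auto-scaling mentioned here can be sketched in the later HOT template syntax (Grizzly-era Heat still used CloudFormation-style templates). This is only an illustrative fragment — the image name, flavor, group sizes and cooldown are my own assumptions, not Cyberport's actual configuration:

```yaml
heat_template_version: 2013-05-23

resources:
  # Group of identical servers that Heat grows and shrinks.
  web_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 5
      resource:
        type: OS::Nova::Server
        properties:
          image: ubuntu-12.04   # illustrative image name
          flavor: m1.small      # illustrative flavor

  # Policy that adds one server when triggered, e.g. by an alarm.
  scale_up_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      auto_scaling_group_id: { get_resource: web_group }
      adjustment_type: change_in_capacity
      scaling_adjustment: 1
      cooldown: 180
```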
For example, I can browse the images in our community cloud — you can see some images here; it has already called the OpenStack API and shows the list. You can also see instances on RightScale, and do a termination or create an image from here. Actually, RightScale uses its own format for instances and server images, so they can be ported from one cloud to another — it's called an MCI, a MultiCloud Image. It is different from the QCOW2 format we use in our OpenStack dashboard. How do we provide monitoring? RightScale provides monitoring that is integrated with collectd on the OpenStack side. Here you can see some Amazon servers on this portal, so our users can use this single portal to manage all their servers across different clouds. Okay, I'll go back to the slides.

So those are some of the features of our community cloud that I have shown you. One SaaS we run on the cloud is the 3D cloud rendering service. This is a render farm consisting of about 10 to 20 servers in a cluster, so they can do rendering work of the kind commonly used in movie making: not all the shots are shot on set, and most of them go through post-production to add visual effects to the pictures. This render farm caters to that need, because another focus of the Cyberport technology centre is to promote digital entertainment in the industry. So this is one of our uses of the cloud, and the service has already been launched to our users, and even to the public — it is a rental service now. What the render farm does: you can see the raw, un-rendered picture here, and after our render farm, you get the full-colour image here. Why do we use a cloud service for this? Because the rendering process needs a lot of processing power; if our users wanted to build a farm on their own premises, they would need to make a big investment.
But on the cloud, we can provide a feasible solution: our users don't need to worry about upgrades or the maintenance cost — we take care of that for them. And this is how it works; maybe I can show you a video so you get a better idea. You can see that for different picture quality levels, they need a lot of rendering time — say, for this kind of quality, 75 hours on a single machine. By using our cloud we can solve this problem for them and make it much more efficient. Actually — sorry, this is the interface of our cloud platform. It is already on a website, so our users can click the link here and use our tools. The platform has also been used for a competition for local schools since last year: there is an annual 3D animation competition, held between August and October each year, and the platform is a free resource for the students to do their rendering on. And this is the winning team from last year.

So let me introduce one more feature of RightScale: the cloud management tools. As I said, there is a function for auto-scaling. On RightScale, our users can configure these features in just a few simple steps. They create a server array to define it, and also set some alerts — decision thresholds that determine, given the working conditions, whether to scale up or scale down. In this case, I have set a CPU idle value of below 30% for three minutes; it will then scale up or scale down accordingly. And you can see there is a grace time, because the traffic may surge past a threshold without being stable — just a sudden rise in resource usage, which may not be the whole picture. So I let it monitor for a certain period before making the decision. On the interface, I just have to input a few parameters here and it does the job.
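The alert logic described above — act only when CPU idle stays past the threshold for a sustained period, so a momentary spike does not trigger scaling — can be sketched in Python. The class name, the 70% scale-down threshold, and the hold period are my own illustrative assumptions, not RightScale's implementation:

```python
class ScaleDecider:
    """Sketch of a RightScale-style alert: fire a scaling decision only
    when a CPU-idle threshold is breached continuously for a hold
    period, so a brief traffic spike is ignored."""

    def __init__(self, low_idle=30.0, high_idle=70.0, hold_seconds=180):
        self.low = low_idle        # idle below this => machines busy
        self.high = high_idle      # idle above this => machines wasted (assumed)
        self.hold = hold_seconds   # how long the breach must last
        self.low_since = None      # timestamp idle first dropped below low
        self.high_since = None     # timestamp idle first rose above high

    def observe(self, now, idle_pct):
        """Feed one sample (timestamp in seconds, CPU idle percent).
        Returns "scale-up", "scale-down", or None."""
        if idle_pct < self.low:
            if self.low_since is None:
                self.low_since = now
            self.high_since = None
            if now - self.low_since >= self.hold:
                self.low_since = None          # reset after firing
                return "scale-up"
        elif idle_pct > self.high:
            if self.high_since is None:
                self.high_since = now
            self.low_since = None
            if now - self.high_since >= self.hold:
                self.high_since = None
                return "scale-down"
        else:
            self.low_since = self.high_since = None  # back to normal band
        return None
```

With the defaults, three minutes of samples below 30% idle produce a single "scale-up" decision, after which the timer resets; any sample back inside the normal band restarts the clock.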
But for mechanisms that are not exposed through the RightScale user interface, I need to do some scripting, and RightScale already provides programming tools for that. On the management platform we also have ServerTemplates, so our users can start from ready-made RightScale images.

So, any questions about that, or about the cloud service?

You mean the 3D cloud service? — How do you treat GPUs? Because the rendering process requires GPUs, right? In your scaling process there is no mention of GPUs, so I am asking how you treat GPUs.

Actually, for the 3D cloud service we are doing some integration, but it's still in progress — we are sorting out some problems in the integration between the 3D cloud and RightScale, so that will come in our next phase. In the current phase we are mainly doing IaaS. The 3D cloud is implemented on our trial platform at this stage, and RightScale — that is, the multi-cloud management — is on our community cloud. So we are still working on the integration between the 3D cloud and the community cloud. Thank you.

On the GPU question: in a typical desktop environment, we use the GPU to display graphics instantly to the screen — that's why you need a very powerful display card. But in a traditional render farm, or in our infrastructure, we use CPUs, because GPUs are not really mature in the cloud environment yet. NVIDIA has already deployed one solution, but it's an appliance — dedicated hardware used to make the GPU available to the rendering service.
But we tested it before: when we tried to use the GPU on the same server under KVM, it got stuck, because the guest takes the screen and utilises the whole GPU, and effectively only one core of the GPU serves the rendering — it's not stable. So there are solutions on the market — NVIDIA's is one — but so far it's vendor lock-in. It's not open source, and we would have to rely on their appliance: you have to buy specific hardware for the cloud service. So we can wait on that one. Thank you.

— Grizzly is the current version, right? Which versions have you deployed? Any experience to share regarding the pain of the upgrades?

Yeah, it's painful. For the previous versions it was really crude — really a lot of hand scripting, right? Nothing automated. We started in May 2011 on Cactus and provided the trial service to our incubatees. But when we tried to upgrade — as you know, Cactus was not all that mature. Some very simple functions you had to do on the back end, from the command line, like associating a floating IP; you could not do the associate-IP from the dashboard. After we upgraded to Diablo — Diablo had a rather nice-looking portal in Horizon — we experimented and saw that a lot of features were not really stable. So we skipped that version and went straight to Essex. When we started on Essex, we found a lot of new services — Keystone is one of the important new features, because it provides single sign-on; back in Cactus that concept didn't exist. That's why we had to completely reinstall everything and then upgrade. But luckily we had a lot of machines, so we set some machines up with the new controller first, and then migrated the instances one by one. That's how we handled the switchover time.
Then from Essex to Folsom we did the same thing. In Folsom, one of the new features is Cinder, for block storage: the previous version used nova-volume, and Folsom uses Cinder. When we upgraded, we had to preserve the users' data from Essex to Folsom, but they are different projects. Yes — Essex has nova-volume; in Folsom they use Cinder. So what we had to do was work from the back end: run command-line operations to extract their volume data, save it, and then put it back — migrate it into the Folsom Cinder. This kind of upgrade we had to do entirely from the back end; you cannot do it from the front end on the portal. That was the biggest headache, because it takes time. But luckily there was not too much large volume data in the migration, so we did it one by one.

— How about Folsom to Grizzly? Any idea?

Oh, Folsom to Grizzly — yes, exactly, that was painful as well. From Folsom to Grizzly there is a new project too: the SDN one, Quantum, now renamed Neutron, right? In Folsom we were using nova-network, and in Grizzly we use Quantum. So we had to reconfigure everything. In Folsom, with nova-network, the parameters were set for VLANs — VLAN tagging — so we had to set the switches into VLAN mode, which was a lot of trouble: a lot of work on the physical switches. But in Grizzly we use the SDN function, Neutron, which doesn't rely on the switch; it just handles it in software. Much simpler. So what we did was reset the switches back to normal mode. Much easier, though we still had to do some work resetting some hardware.
But much easier to manage — and separately. We had to separate the environments, yes. Exactly, separate. Because the design of OpenStack currently cannot accommodate in-place upgrades; even the database structure inside is different. Yeah, exactly — the database schemas, because there are a lot of databases, and the schemas of the fields are quite different: one release uses an ID and the next uses a UUID, so we had to change whole table schemas. So in each upgrade, currently, we need to prepare both environments — the previous version and the next version — and do the migration for each project one by one. Cutting down that time is our goal for the next migration. Okay. Thank you. Thanks.
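As a footnote to the Essex-to-Folsom discussion above, the back-end, one-volume-at-a-time move can be sketched as generated shell commands. This is only an illustration of the idea: the `nova-volumes`/`cinder-volumes` volume-group names are the defaults of that era, the staging path is made up, and in reality the target logical volume is named after the UUID that Cinder assigns (the ID-versus-UUID change mentioned above), not the old name:

```python
def volume_migration_commands(volume_id: int, size_gb: int) -> list[str]:
    """Build the shell commands for moving one nova-volume LVM volume
    into Cinder: dump the old logical volume to a file, create a new
    Cinder volume of the same size, then write the data back.

    Illustrative sketch only; paths and naming are assumptions."""
    name = f"volume-{volume_id:08x}"    # nova-volume naming, e.g. volume-0000002a
    backup = f"/backup/{name}.img"      # made-up staging path
    return [
        # dump the old nova-volume logical volume to a file
        f"dd if=/dev/nova-volumes/{name} of={backup} bs=1M",
        # create an empty Cinder volume of the same size
        f"cinder create --display-name {name} {size_gb}",
        # write the data back; in practice the target LV is
        # volume-<uuid>, with the UUID chosen by Cinder
        f"dd if={backup} of=/dev/cinder-volumes/{name} bs=1M",
    ]
```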