...which is really very much in line with open source, and I've seen at previous conferences the iterations of this work, which again are very much open source, so I'll pass it over to the speaking team.

I am Mikyung Kang from USC ISI. This bare-metal open source project is being worked on by USC ISI, NTT docomo, and VirtualTech Japan. First, I'm going to introduce an overview of the general bare-metal provisioning framework, including why bare-metal provisioning, the OpenStack bare-metal history, the bare-metal provisioning framework, and the release plan. After that, NTT docomo will present the more technical parts of bare-metal provisioning support.

The OpenStack cloud operating system enables providers to offer on-demand computing resources by provisioning and managing large networks of virtual machines. Virtualization plays an important role in Nova to support agility and flexibility, but there is a significant performance penalty in response time and context switching on virtualized servers compared to bare-metal servers. Also, some non-x86-based CPU architectures have either poor or non-existent support for virtualization. And some users want to use the bare-metal machine itself, without virtualization, through OpenStack. So, to support real-time analysis with low overhead, to support various CPU types, and to offer bare-metal resources themselves without virtualization using OpenStack, we decided to focus on a general bare-metal provisioning framework.

There is a big difference between bare metal and virtual machines. In the case of virtual machines, there is a hypervisor between the physical resources and the virtual machines, but in the case of bare metal there is no hypervisor, so a bare-metal machine can access physical resources freely, and we need to achieve the same security level as in a virtualized environment. Also, in the case of virtual machines, one nova-compute is dedicated to one physical machine, and many virtualized instances can be running on that system. In the case of bare metal, one nova-compute node should be dedicated to multiple bare-metal machines. So the nova-compute node should aggregate multiple bare-metal resources, and that capability information should be sent to the scheduler. Then, based on that aggregated capability information, the Nova scheduler schedules bare-metal instances to the bare-metal nova-compute node.

From the Nova upstream code point of view, ISI initiated the general bare-metal framework, but at that time ISI targeted one system type for HPC cloud, that is, non-PXE multi-core bare-metal machines. After that, starting from the last Folsom Design Summit, ISI and NTT docomo began to collaborate to extend the Essex bare-metal provisioning to support PXE and also an independent bare-metal MySQL database. For the Folsom release, only Tilera non-PXE machine image provisioning was considered, so a Tilera system could provision Tilera bare-metal machines using TFTP or NFS and using the PDU (power distribution unit) power management method. Finally, in the Grizzly version, to support both PXE and non-PXE machines and both the PDU and IPMI schemes, we organized the general bare-metal framework as shown in the slide. First of all, the compute driver should be set to the bare-metal driver, just like the libvirt driver. The bare-metal driver can be set to PXE or non-PXE, and the power manager can be set to IPMI or PDU. Also, using instance type extra specs, the CPU type can be set to x86_64, ia64, and so on.
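To make the CPU-type extra spec idea a bit more concrete, here is a minimal Python sketch of how a flavor's cpu_arch extra spec might be matched against the capability a bare-metal compute node reports; the flavor name, extra-spec key, and capability fields are illustrative assumptions, not the exact upstream keys.

```python
# Minimal sketch (assumed names): matching a flavor's cpu_arch extra spec
# against the capability a bare-metal nova-compute node reports.

# A hypothetical bare-metal flavor with a CPU-architecture extra spec.
flavor = {
    "name": "bm.small",
    "vcpus": 4,
    "memory_mb": 4096,
    "extra_specs": {"cpu_arch": "x86_64"},   # could also be "ia64", "tilepro64", ...
}

# Capability a bare-metal compute node might report to the scheduler.
node_capability = {
    "hypervisor_type": "baremetal",
    "cpu_arch": "x86_64",
    "free_nodes": 3,
}

def host_passes(capability: dict, flavor: dict) -> bool:
    """Return True if the node's CPU architecture satisfies the flavor's extra spec."""
    wanted = flavor["extra_specs"].get("cpu_arch")
    return wanted is None or capability.get("cpu_arch") == wanted

print(host_passes(node_capability, flavor))  # True
```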
Based on the bare-metal hypervisor type and the CPU type, the Nova scheduler schedules bare-metal instances to the bare-metal nova-compute node. Each nova-compute node should manage one homogeneous bare-metal farm, like a PXE x86_64 bare-metal farm, a PXE ARM farm, or a non-PXE Tilera bare-metal farm. In the Essex and Folsom releases, to avoid touching the Nova mainstream code at all, we were using only a simple text file describing the bare-metal node information, and homogeneous capability information was sent by the nova-compute node. In the Grizzly version, this text file is replaced by an independent bare-metal MySQL database, separate from the main Nova database, and the homogeneous capability information is being changed to multiple capability information. But to handle multiple capability information from the scheduler's point of view, the Nova scheduler side and the nova-compute side would have to change a lot. So it is still being discussed which of the two methods to use, multiple capability information or homogeneous capability information; it will be determined in tomorrow's session. For the Grizzly merge, we are targeting the bare-metal driver with the PXE and non-PXE parts, the bare-metal MySQL database, the bare-metal deployment and management scripts, and also the bare-metal scheduler side by the grizzly-1 deadline. If that merge proceeds smoothly, the next targets will be the remaining issues, security enhancement (network isolation), and scalability by the grizzly-3 deadline. That's all I have, and now Ken will present the more technical parts.

Let me introduce our team. My name is Ken Igarashi from NTT docomo, and I'm leading an OpenStack project at NTT docomo. Myself and Arata are among the contributors to bare-metal provisioning, especially for x86. Also here today are Hiro and Mana, who are working on Zabbix and OpenStack integration, and she will show some demos during the presentation.

As Mikyung has already mentioned, one of the motivations of the project is performance improvement. We actually love virtualization because we can get agility and flexibility easily, but we still pay a performance penalty compared to bare metal, for example in context switching, throughput, and delay. So there is a big gap compared to bare metal. We know that if we use a hardware accelerator like SR-IOV, the performance will improve, but then we lose the flexibility of virtualization: we cannot do migration, and we also need to install drivers in the guest operating system. So finally, we decided to just use bare-metal machines.

Before going into the details, let's review the current VM provisioning procedure in Nova. First, a user requests an instance, and once the request reaches the Nova scheduler, the scheduler chooses an appropriate nova-compute based on the instance type, and then the instance is provisioned on one of the nova-compute nodes. If the user wants, they can create a network among tenants. Also, if they want, they can attach a Nova volume to one of the VMs. Finally, they get access to the VM through SSH or VNC. And if they want, they can create a new machine image from the running instance. So these are the necessary functions we believe bare-metal provisioning must support. We must support those seven functions, but the challenge is that there is no hypervisor. In general, in bare-metal provisioning, we need to achieve all those functions without using a hypervisor. That is the challenge of this provisioning.
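Since the framework exposes the same API as a regular nova-compute, the user-facing flow can be illustrated with python-novaclient; this is only a sketch, and the credentials, flavor name, and image name are made-up placeholders rather than anything from the demo environment.

```python
# Minimal sketch using python-novaclient (names and credentials are placeholders).
from novaclient.v1_1 import client

nova = client.Client("demo", "secret", "demo-tenant",
                     "http://keystone.example.com:5000/v2.0/")

# The same calls a user would make for a VM also drive a bare-metal instance,
# because the bare-metal compute driver sits behind the normal Nova API.
flavor = nova.flavors.find(name="bm.small")        # hypothetical bare-metal flavor
image = nova.images.find(name="ubuntu-baremetal")  # hypothetical machine image

server = nova.servers.create(name="bm-instance-01", image=image, flavor=flavor)
print(server.id, server.status)

# Attaching a volume works the same way once the instance is active, e.g.:
# nova.volumes.create_server_volume(server.id, volume_id, "/dev/vdb")
```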
Okay, Mikyung has already covered the instance request and the scheduler, so I'll explain the details of image provisioning on x86. First, we need some preparation. In the preparation phase, we first need to create a kernel and a ramdisk for deployment. For that purpose, we provide a bare-metal mkinitrd script, and using this script you can create a kernel and ramdisk just for deployment. You need to register those with Glance. Second, you need to run several servers on top of nova-compute: one is a PXE server and another is a bare-metal deployment server, which is also provided by docomo. Third, you need to configure nova.conf. The first line indicates that this nova-compute acts as a bare-metal provisioning server. We also specify that we use PXE for deployment and IPMI for power management, to turn bare-metal machines on and off. And those two lines specify the kernel and the ramdisk for deployment, the ones created in the preparation phase.

This is the first boot. Since our bare-metal provisioning framework supports the exact same API as nova-compute has, a user can use euca2ools or the nova command to create a bare-metal instance. Once the instance request reaches the Nova scheduler, the scheduler chooses the best bare-metal machine to host the instance, and then nova-compute turns on the bare-metal machine through IPMI. Using PXE boot, the kernel and ramdisk for deployment are downloaded. In the second step, using that kernel and ramdisk, we download the machine image via iSCSI and create a file system. The ramdisk also reads the network configuration from nova-compute and configures the MAC and IP address of the bare-metal machine. Finally, we set up PXE for the second boot and just reboot the bare-metal machine.

This is the second boot. In the second boot, you can use your own kernel and ramdisk, which are specified through the euca or nova command. Here, in the second boot, that kernel and ramdisk are downloaded, and once they are downloaded to the bare-metal machine, it boots from the local disk and becomes a bare-metal instance. That's it for the provisioning.

Now, about network isolation. In a virtual machine environment, there is a hypervisor, and the VLAN is created under the hypervisor. But in a bare-metal environment there is no hypervisor, which means a user can change the MAC and IP address if he wants, and can change the VLAN tag as well. So we need to provide some prevention mechanism against malicious users, so they cannot harm other users' networks. This is the function we currently support. We use Quantum, specifically the NEC (Trema-based) and Open vSwitch plugins. Using the Quantum API, we install two kinds of filter rules: one to protect against address spoofing, and another to create a private network among instances. By using these functions, I think we can provide exactly the same level of network isolation as nova-network provides.

The third one is Nova volume attachment. Again, in a bare-metal environment there is no hypervisor, so if you do iSCSI discovery, you can see all the Nova volumes and attach any volume you want. So we also need some mechanism to protect against this. The solution is quite similar to the network isolation: we use Open vSwitch, and with Open vSwitch we isolate the iSCSI network. We also use CHAP for the ACL.
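To make the two kinds of filter rules mentioned above a little more concrete, here is a rough Python sketch of the shape they take; the field names and the check function are illustrative assumptions, not the actual Quantum plugin API used in the project.

```python
# Illustrative sketch only: the two kinds of filter rules described above
# (anti-spoofing and tenant-private networking). Field names are assumptions.

# Rule 1: only traffic whose source MAC/IP matches what Nova assigned to the
# bare-metal instance is allowed out of its switch port (anti-spoofing).
anti_spoofing_rule = {
    "port": "bm-node-01-port",
    "allowed_src_mac": "52:54:00:12:34:56",
    "allowed_src_ip": "10.0.0.5",
    "default": "drop",
}

# Rule 2: traffic is only forwarded between ports that belong to the same
# tenant network, giving each tenant a private segment.
tenant_network_rule = {
    "network_id": "tenant-a-net",
    "member_ports": ["bm-node-01-port", "vm-03-port"],
    "policy": "drop traffic to or from ports outside member_ports",
}

def egress_allowed(rule: dict, src_mac: str, src_ip: str) -> bool:
    """Check a packet from the bare-metal port against the anti-spoofing rule."""
    return src_mac == rule["allowed_src_mac"] and src_ip == rule["allowed_src_ip"]

print(egress_allowed(anti_spoofing_rule, "52:54:00:12:34:56", "10.0.0.5"))  # True
print(egress_allowed(anti_spoofing_rule, "52:54:00:de:ad:00", "10.0.0.5"))  # False
```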
For VNC access, we are simply using serial over LAN, and with serial over LAN you can access the bare-metal nodes through a web console like this. So those are the functions we have implemented. From the Horizon point of view, these are the supported operations. Currently, we don't support suspend, because if we suspend a server there are many issues on resume, so we still need to spend time investigating. A snapshot is also not supported; for this, we are still discussing the best way to achieve this function. And for the console, we haven't supported VNC yet, but we do support serial over LAN. So let me hand the presentation over to Mana, and she will show some demonstrations.

I will show you two demonstrations: one of the functions we implemented in the bare-metal provisioning framework, and another of auto-scaling nova-compute using Zabbix. I will show launching a bare-metal instance, network isolation, attaching a volume, and launching a bare-metal instance using Horizon. We select a bare-metal flavor and begin. The bare-metal instance first runs in deployment mode and gets the machine image via iSCSI onto the local hard disk, then PXE boots in instance mode, and we associate an IP with the instance.

Next, I will show you network isolation. There are two users; each user has one bare-metal machine and one VM, and each network has already been isolated using an OpenFlow switch. When user one broadcasts, you can see the packets only reach user one's bare-metal machine. When user two broadcasts, you can also see the packets only reach user two's bare-metal machine.

Next, I will show you volume attachment. Through the bare-metal instance console, we do iSCSI discovery and create a partition and file system. This is the console in a web browser via serial over LAN; in this browser, we can execute all commands.

Next, I will explain scaling nova-compute using Zabbix. One of the benefits of bare-metal machine provisioning is that we can manage bare-metal machines the same as virtual machines and utilize all the ecosystem created by OpenStack and for OpenStack as we change resources. I will explain how Zabbix scales nova-compute. We have integrated a data collector into Zabbix, and we can get the total vCPUs assigned to VMs and the total physical cores; we use those two items. These slides show the details of the triggers and actions: one line shows the total physical CPUs, and the green line shows the total vCPUs, and this is the list of nova-computes. The Zabbix display shows that the total physical CPU count is four. We launch VMs with the tiny flavor, so the total vCPUs goes up and the green line approaches the physical core count, and the scale-out script is executed by Zabbix, adding a nova-compute to the list. We launch one more VM, and later a compute node that has no VMs is shut down. That's it. Thank you.

Oh, no, no, that's it for the demos, and there are still a couple of slides. I think the second demonstration is very important. I know there are tons of tools that can support bare-metal provisioning, but one of our advantages is full compatibility with nova-compute. So if Nova supports not only auto-scaling but also things like failover, then you can use those functions for bare metal. That is, I think, our biggest advantage. And these are the changes we've submitted. Unfortunately, not that many yet, but those are the code changes we've submitted. We also provide detailed documentation on the wiki. ISI's code and docomo's code have been merged, and you can download the latest code from GitHub.
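As a rough illustration of the scale-out logic in the second demo, here is a minimal Python sketch of the kind of decision a Zabbix trigger and action pair makes; the threshold, item values, and scale-out script path are assumptions for illustration, not the actual demo configuration.

```python
#!/usr/bin/env python
# Minimal sketch (assumed names): the decision behind the Zabbix auto-scaling demo.
# Item values would really come from Zabbix; here they are passed in directly.
import subprocess

def maybe_scale_out(total_vcpus: int, total_physical_cores: int,
                    headroom: int = 1) -> bool:
    """Run the scale-out script when assigned vCPUs approach the physical cores.

    total_vcpus: sum of vCPUs assigned to running VMs (Zabbix item).
    total_physical_cores: sum of physical cores across nova-compute nodes (Zabbix item).
    headroom: how close vCPUs may get before adding another compute node.
    """
    if total_vcpus + headroom >= total_physical_cores:
        # Hypothetical script that provisions one more bare-metal nova-compute node.
        subprocess.call(["/usr/local/bin/scale_out_nova_compute.sh"])
        return True
    return False

# Example: four physical cores, three vCPUs already assigned -> scale out.
print(maybe_scale_out(total_vcpus=3, total_physical_cores=4))
```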
And we want many, many people giving us feedback and advice. If you want to join the project, please come to tomorrow's technical session held at 4:30. It's time to start the collaboration. Any questions?

Currently, we need to register the bare-metal machines in the database manually, and we need to connect nova-compute and the bare-metal machines manually. In the future, we plan to use Chef for discovery, and we will do all the connection automatically using Chef.

The first PXE boot is only used for deployment. I think you are asking about the PXE boot for the second boot. If you are okay with using the deployment kernel for the boot, you don't need to use your own kernel and ramdisk; just do one PXE boot, and that's it. But if a user wants to use their own ramdisk and kernel, which is already supported in Nova, then we need this kind of second PXE boot. In the second PXE boot, we use the user's kernel and ramdisk, not the kernel and ramdisk for deployment, so we can replace the kernel and ramdisk completely from the first boot. In the first boot, we just use the deployment kernel and ramdisk. I see, I got it. Okay. The question is why we need a second boot to load this kernel and ramdisk. Because it is very difficult to set up GRUB or some other boot loader; that is the only reason. This way is easier. It's very complicated to set up a GRUB script, so that's why we are just using a second PXE boot.

As for which machine images are supported: I cannot confirm, but I think all machine images should work. All machine images should work, but they are not tested; we have just tested one or two machine images, and those work. Thank you very much.

Thank you very much, guys. That was a great, very technical presentation, well delivered with those demos. It's almost six o'clock. Mirantis' party is very close, at Roy's restaurant or diner or something like that. Roy's something.