Hi, welcome to this session. I'm Sheng Yang from Rancher Labs, now a part of SUSE. Today we'll talk about a new project we started recently. It's hyperconverged infrastructure software, but it's also a little more than that.

A few years back, the concept of hyperconverged infrastructure (HCI) was proposed: combining servers and storage into a distributed infrastructure platform. HCI software reduced the burden on IT of managing servers and storage separately. Even with commodity hardware, the software layer is intelligent enough to figure out where to put the data to keep everything highly available; no special storage or server hardware is needed. It sounds like "pets versus cattle": by switching to HCI, you no longer need to treat your storage as a pet, and you don't need to worry about whether you're operating it correctly. And just in case you didn't know, "pets versus cattle" is also where the name Rancher comes from — the "rancher" part, of course.

So in the data center, HCI software is thriving. But from a software operator's perspective, there's still more to be desired. You cannot run your applications directly on the HCI, since the software is most likely designed to run on top of VMs, and the software hasn't been programmed to know how to spin up VMs or connect to the storage provided by different HCI solutions. That's because each of these HCI solutions speaks a different API language — each has its own proprietary API. The result is that IT admins still need to operate software differently based on which HCI or virtualization solution they have chosen. Terraform and other technologies were created to help, but IT admins still need to learn a lot about how to operate the software and write a lot of customized scripts to maintain it. That is not easy work.

Fast forward to now: we see Kubernetes taking the world by storm.
We no longer need to treat applications as pets, since containers and Kubernetes have become the de facto application API. As long as an application works on Kubernetes, it can be treated as cattle, and no one needs to worry about what hardware it's running on, what storage it's using, or how to spin up multiple VMs to run it, since all of that is taken care of by Kubernetes.

We want to introduce the concept of HCI 2.0. By 2.0, we mean we're not only going to integrate servers and storage, but also integrate applications on top of them, through the Kubernetes API. We will replace the proprietary APIs with the open industry standard: Kubernetes.

So let me introduce the project, Harvester. Harvester is open-source hyperconverged infrastructure software built on top of Kubernetes. Besides Kubernetes, it's also built on top of multiple cloud-native technologies, including KubeVirt, Longhorn, KVM, and Multus. It's been designed to work on bare metal, providing users with a single unified API to deploy VMs and applications. Just as with traditional HCI software, you don't need to worry about virtualization or storage. Just use the ISO image or PXE to install Harvester on your bare-metal nodes. Then you have a cluster which can spin up VMs, talk the Kubernetes API, or even spin up a Kubernetes cluster on top of VMs. Now, as long as your applications speak the Kubernetes API, they can be deployed anywhere — the cloud, the data center, or the edge. Harvester makes Kubernetes ubiquitous and bridges the gap between traditional HCI software and the modern cloud-native ecosystem.

And today, we're announcing the first beta release of Harvester: Harvester 0.2. Currently, it supports the following features. You can install it using ISO or PXE, it supports VM lifecycle management, and it provides distributed block storage.
We also provide flexible network options, including multiple-NIC support and VLAN networking. There is also VM image management support and VM live migration. And finally, you are also able to spin up managed Kubernetes clusters with the help of the built-in Rancher.

So here is what the architecture looks like in Harvester today. Harvester installs k3OS on the nodes — well, the OS might change in the future, and we'll keep you posted. k3OS is bundled with K3s, so we can use it to create a Kubernetes cluster on the bare-metal nodes. Then Longhorn and KubeVirt are installed once the Kubernetes cluster is up. Now the user can start running VMs, or spin up Kubernetes clusters on top of those VMs. As you can see, there are multiple networks available to the VMs. The management network is implemented by the Kubernetes overlay networking; the other VLAN networks are implemented by the Harvester VLAN CNI plugin. VMs can choose to attach to multiple networks, thanks to Multus.

Well, enough talking. Now let me show a quick demo of Harvester.

All right, let's dive into the demo. Once you've installed Harvester, you should see the IP address you can use to log in to the Harvester UI, which is going to look like 172.16.19.41:3443. Put that in your web browser and you'll get to the Harvester UI. The first time you log into Harvester, it asks you to set a new password. Here, we've already done that, so let me log in.

All right, now you're looking at the Harvester dashboard. This is Harvester 0.2 RC1, which is not a stable release yet, so by the time this video is published, you might see a newer version than what I have right now. We have a total of three nodes and two images, and you can also see the CPU, memory, and storage metrics down here, as well as recent events that happened to this cluster a few minutes ago.
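Before moving on: since the architecture above builds on KubeVirt, every VM you see in the demo corresponds to a KubeVirt `VirtualMachine` object in the Kubernetes API. A minimal, illustrative sketch of what such an object looks like — the name, sizes, API version, and PVC below are assumptions for illustration, not taken from the demo:

```yaml
# Illustrative KubeVirt VirtualMachine manifest (names and sizes are examples).
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: leap-01
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 4              # vCPUs for the guest
        resources:
          requests:
            memory: 4Gi         # guest memory
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: leap-01-rootdisk   # backed by the distributed block storage
```

Because VMs are plain Kubernetes objects like this, the usual kubectl tooling and declarative workflows apply to them as well.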
Let's go to the next page. On the Hosts page, you can see the details of those three nodes: what operating system each has, its IP, and all its other configuration. On the Virtual Machines page, you can create virtual machines, and Volumes will be the storage used by the virtual machines. On the Images page, you're able to import new virtual machine images for your use. Here, I already have two images imported: one is Ubuntu, the other is openSUSE.

So let's create a virtual machine first. Create — I'm going to call this leap-01. I'm going to give it four CPUs and 4 GB of memory, and select the Leap image. As for the SSH key, I don't have any right now, so let's skip that. On the volume tab, you can see that we are going to automatically create a 10 GB root disk from the openSUSE image for this VM, but you can also add more volumes. On the network tab, you can see there's already a default network set up, which is in fact the Kubernetes overlay network, if you know a bit about Kubernetes. You can also add another network, which will be one of the VLAN networks you have configured — we'll take a look at the VLAN network configuration later. Let's just add VLAN 91 for now. In the advanced options, you have the choice to override the hostname, select the machine type, and also do the cloud configuration using the user data and network data sections. Because the image we specified is a cloud image, we need to set the password so we can log in. Also, since we have configured two network interfaces, we can configure them to use DHCP, so they will automatically acquire IP addresses from the DHCP server once the VM is up. That's all we need. Let's click Create. Now you can see the VM is being created, and we can take a look.
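The cloud configuration used here is standard cloud-init. As a hedged sketch of what the user data and network data for this demo might look like — the password and interface names are illustrative, and the two sections are separate fields in the UI, combined below with `---` only for brevity:

```yaml
# User data (cloud-config): set a login password for the cloud image.
#cloud-config
password: changeme          # illustrative password
chpasswd:
  expire: false
ssh_pwauth: true
---
# Network data: configure both NICs for DHCP (cloud-init network config v1).
version: 1
config:
  - type: physical
    name: eth0              # management (overlay) NIC
    subnets:
      - type: dhcp
  - type: physical
    name: eth1              # VLAN NIC
    subnets:
      - type: dhcp
```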
All the basic information is now showing up, including the volumes, networks, SSH keys, and cloud config we set before. And in the Events tab, you can see those are coming from Kubernetes events. In fact, we are going to do more work on this part later to make it easier for non-Kubernetes admins to work with.

All right, let's go back to the previous page, and now we can see that the leap-01 VM is already running. Click Console, and you should be able to access the VNC console provided by the setup. Well, it turns out I don't have enough space here. Now it's good. All right, everything seems to be working. Let's log in. "opensuse" is the username for the openSUSE Leap image, and the password is the one we set before. Now you can see there are two NICs, eth0 and eth1 — the names of the interfaces depend on what operating system you're using — and both of them have already got addresses. You can also notice that eth1 is getting an address in the 172.16.91.x range, which is in fact the VLAN 91 range. We can also run some simple tests here. Let's create this file, demo, and sync it, so we can test our backup-and-restore feature later. That's all for this VM; let's close it.

Now we can create a backup for this VM; let's name it first. Okay — in fact, the backup is not created yet. This just says that the backup has been initiated; we are going to iron out and polish these small details before the final release. And here we have a backup in progress, and it should be done pretty soon. This backup mechanism uses the Longhorn backup mechanism to ship your VM backups to a remote S3 or NFS server. To do that, you need to configure the backup target in the Settings area. Let's take a look at the backup — it should be finished any time now. Before that, we can go through the other features we have.
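As a rough sketch of what that backup-target configuration involves — a Longhorn-style S3 target plus a credential secret; the bucket, region, secret name, and keys below are made-up examples, not real values:

```yaml
# Illustrative backup-target settings (Longhorn-style s3://<bucket>@<region>/ URL).
backup-target: "s3://my-harvester-backups@us-east-1/"
backup-target-credential-secret: "s3-backup-credentials"
---
# The referenced credential secret (example values only).
apiVersion: v1
kind: Secret
metadata:
  name: s3-backup-credentials
stringData:
  AWS_ACCESS_KEY_ID: "EXAMPLEKEY"
  AWS_SECRET_ACCESS_KEY: "EXAMPLESECRET"
```

An NFS target would instead take an `nfs://` URL and need no credential secret.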
The first item in the Advanced menu is VM templates. You can create specific templates for certain types of images. For example, if you have an ISO image — meaning you are going to install from the ISO onto the root disk — you'd better choose the ISO-based template, because it will automatically load your image as an ISO rather than as the root disk. The raw-image-based template is the default one we're using. And the Windows-ISO-image-based template will load both the ISO and a VM volume containing the virtio drivers, so that Windows can make the virtual devices function.

So yeah, now the backup is here, but we'll come back to that later. In the Networks section, you can see that we have already created two networks here with different VLAN IDs. Creating one is in fact very easy. I know that we have configured a few VLANs on our switch, so I can add VLAN 33 there as well. Behind the scenes, we are going to propagate it to all the nodes and create the corresponding VLAN interface for the network. The SSH Keys page doesn't have anything here. And the only user our beta allows right now is the admin user; we are going to support multi-tenancy in later releases.

All right, let's go back to the backup. Now we see that the leap-01 backup is ready. So let's check: did it really record our data? I'm going to restore it to a new virtual machine named leap-01-restore, and let's see how that goes. On the Virtual Machines page, you can see the restore is happening, and it will take some time, so let's wait a bit. Now we see leap-01-restore is already running. Let's access the console. Yeah, it's booting up. In fact, one point of this restore is that all the information and configuration from the original VM is carried over to the restored VM as well.
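Stepping back to the network configuration for a moment: under the hood, each VLAN network like these maps to a Multus `NetworkAttachmentDefinition` using a bridge CNI configuration, which is how the setting gets propagated to every node. A minimal sketch — the bridge name and exact fields here are assumptions for illustration:

```yaml
# Illustrative NetworkAttachmentDefinition for a VLAN network.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan91
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "harvester-br0",
      "vlan": 91,
      "ipam": {}
    }
```

VMs then attach to this network by listing it as an extra interface, which is what Multus makes possible.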
So you can see that when this VM gets created, it has two NICs, as we configured before, and both of them are configured with DHCP addresses. Let's try logging in. By the way, the password is the same too, because of the same cloud config we stored. Okay, both eth0 and eth1 have got IP addresses, and if you take a look, the demo file contents have been written as well. So this is indeed a restore of our previous VM; the backup and restore works.

All right, let's move on to the next demo. Another thing you can do with Harvester is in fact use Harvester to create Kubernetes clusters directly. That part of the experience is still being polished, but we can show you some early results we have now. In the Settings, you need to enable Rancher. Then you'll see a Rancher icon at the top right; click that, and it will lead you to our built-in Rancher. This Rancher installation is also bundled with the Harvester node driver, so you can just go to Add Cluster, click Harvester, and create a cluster from here. Let's call it zero-guest-cluster and set the node count to three. The node template has already been chosen — it was created beforehand. etcd, Control Plane, and everything looks good, so let's create it.

All right, now Rancher is talking to Harvester to create this cluster. If you look back at the virtual machines, momentarily you should be able to see three nodes have been created; they're starting up and will serve as the VMs for the new Rancher-managed Kubernetes cluster. This is going to take a while, so I will skip forward. Now the cluster is ready, and you can take a look at the Nodes page listing three nodes, which are the same ones we saw from the Harvester side.
And now you can just work on this cluster: deploy applications, add workloads, do whatever you want with this Kubernetes cluster. All right, that concludes our demo.

So this is the roadmap of Harvester. We're planning to have Harvester 0.3 released in Q3 this year, and the GA release later this year. There's still a ton of work to do between now and GA, as you can see from the list here. But I encourage you to download the Harvester beta release and give it a try — we'd love to have your feedback. If you have any questions about Harvester, feel free to reach out to the team. You can find the latest ISO on GitHub at harvester/harvester, or on our website at harvesterhci.io. Also feel free to join the #harvester channel in the Rancher Users Slack. We're looking forward to your feedback as always. Thank you.