Welcome, everyone. This is Jonathan Gershater, and I'm here to give a lightning talk on the value of running containers on bare metal servers. To set context: a bare metal server simply has the Linux operating system installed, whereas a virtualization platform has a hypervisor layer that shares CPU and RAM among the guests and the applications.

This talk is really summarized in this slide. On the right, you see a container environment without the hypervisor and virtualization layer. On the left, you see one with a hypervisor, either KVM or ESXi. The difference is not only the hypervisor layer but also the people: the virtualization team that runs the virtualization software is an extra complexity in terms of staffing. Do they have the right skill set, and so on? We'll get into those details next.

Since you have separate operations teams when you operate in a virtualized environment, who takes the call? Who answers support? Is the problem in the virtualization layer, or somewhere else? That is extra complexity for troubleshooting.

Bare metal offers increased performance: without the virtualization overhead you get more speed, you can access real-time features, and overall performance improves. Density also increases, because you can run more containers on a bare metal server without the hypervisor's extra overhead. Sometimes, to reduce the noisy-neighbor problem, one might run one container per VM, and then that density advantage shrinks further, because the number of containers is limited to how many VMs you can operate. Cost improves with bare metal when you remove that hypervisor virtualization layer as well, and the higher density also increases the ROI of bare metal.
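To make the density argument concrete, here is a back-of-the-envelope sketch. All the sizes in it (host RAM, per-container footprint, guest-OS and hypervisor overheads) are illustrative assumptions, not figures from the talk; it simply shows why removing the per-VM overhead raises container density.

```python
# Back-of-the-envelope container density: bare metal vs. one container per VM.
# All numbers are illustrative assumptions, not measurements from the talk.

HOST_RAM_GB = 256        # total RAM on one physical server (assumed)
CONTAINER_RAM_GB = 2     # average RAM per application container (assumed)
GUEST_OS_RAM_GB = 1      # guest-OS overhead paid by each VM (assumed)
HYPERVISOR_RAM_GB = 16   # RAM reserved by the hypervisor itself (assumed)
HOST_OS_RAM_GB = 4       # host-OS footprint on bare metal (assumed)

def containers_on_bare_metal(host_ram: int = HOST_RAM_GB) -> int:
    """Everything except the host OS footprint goes to containers."""
    return (host_ram - HOST_OS_RAM_GB) // CONTAINER_RAM_GB

def containers_one_per_vm(host_ram: int = HOST_RAM_GB) -> int:
    """Noisy-neighbor mitigation: one container per VM, so each container
    also pays the guest-OS tax, and the hypervisor takes its own cut."""
    per_vm = CONTAINER_RAM_GB + GUEST_OS_RAM_GB
    return (host_ram - HYPERVISOR_RAM_GB) // per_vm

bare = containers_on_bare_metal()  # 126 containers
vm = containers_one_per_vm()       # 80 containers
print(f"bare metal: {bare}, one-per-VM: {vm}, ratio: {bare / vm:.2f}x")
```

Changing any of the assumed overheads changes the ratio, which is exactly the point made later in the Q&A: the savings depend heavily on your hardware and original VM sizing.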
Security is better because you reduce the attack surface, and virtualization vulnerabilities are removed entirely. If applications have specific hardware needs, such as GPUs or FPGAs, those can be accessed more easily when the operating system reaches them directly, without a virtualization layer in between. And again, troubleshooting should be easier without a virtualization layer to troubleshoot as well.

So let's look at some recommendations. One is a kind of hybrid mode, where you run the Kubernetes master nodes on the virtualization platform and, as we'll see next, run the worker or application nodes directly on bare metal for high I/O and for special use cases such as SR-IOV, GPUs, and FPGAs. Some recommend sizing hosts in advance, especially when you're considering pod density, since provisioning bare metal hosts may take longer. It's important to secure the operating system: don't run non-critical services, patch it, and run vulnerability scans; this is standard security practice. Be careful to use container base images that are trusted and vetted. When you pull an arbitrary container image from the wild, you don't know: does it have a virus? Is root access enabled? What packages are running on it? If you use a trusted container registry, those images are pre-vetted and pre-secured.

In closing: with containers on bare metal, you get a lower TCO, increased utilization and density, and better performance. Thank you. Let's take Q&A; this is live.

Akash asks: are there any evolving standards for bare metal provisioning? Yes, there is an open source tool that does bare metal provisioning called Ironic, and I can provide a link to it in the chat. Timothy Lin asks: aside from use cases involving large GPU/FPGA workloads, are there benefits to having a hypervisor in there? The hypervisor's benefit is perhaps for the master or controlling nodes, but certainly for the worker nodes, you want those on bare metal.
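As a small illustration of the "trusted registry" recommendation above, here is a sketch of an admission-style check that only allows images pulled from an allow-listed registry. The registry names and the parsing rule are simplified assumptions for illustration; production enforcement would also verify image signatures through your registry's tooling.

```python
# Sketch: allow only container images that come from vetted registries.
# The registry hostnames below are hypothetical examples.
TRUSTED_REGISTRIES = {
    "registry.internal.example.com",
    "registry.access.redhat.com",
}

def registry_of(image_ref: str) -> str:
    """Return the registry part of an image reference.

    By common convention, the first path component names a registry only
    if it contains a '.' or ':'; otherwise it is a namespace on the
    default public registry."""
    first = image_ref.split("/", 1)[0]
    if "." in first or ":" in first:
        return first
    return "docker.io"  # default registry assumed for bare names

def is_trusted(image_ref: str) -> bool:
    """True if the image reference resolves to an allow-listed registry."""
    return registry_of(image_ref) in TRUSTED_REGISTRIES

print(is_trusted("registry.access.redhat.com/ubi9/ubi:latest"))  # True
print(is_trusted("randomuser/cryptominer:latest"))               # False
```

A check like this answers the "where did this image come from?" question mechanically; the "what's inside it?" questions (viruses, root access, installed packages) are what the pre-vetting on the trusted registry side is for.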
So let me type in those two answers. Next question: can you please comment on the orchestration aspects for bare metal hosts? I'm not sure exactly which orchestration aspect you mean, but yes, the bare metal hosts can be orchestrated with Ansible or similar tools. Andrew asks: would you decrease the number of NodePort services when running on bare metal versus VMs? Yes. Chiragath notes: I think we can also pass SR-IOV and GPUs directly through to a VM over the virtualization layer. Yes, that's true.

Akash asks: in terms of hardware, how many cores and how much RAM can you save by adopting containers versus virtual machines? That depends on the hardware and the use case; it's hard to give a raw number without knowing the hardware you have and the use cases of the applications. I've seen around a 3-to-1 ratio, but it's very dependent on the original size of the VMs. Thomas Linden asks: are there reference architectures available for review? Yes, there are. I don't have them immediately available, but I will have a look and try to send them to you.

Another question: traffic isolation becomes a bigger concern on bare metal; is the community thinking of how to address this? I would ask for a bit more detail on traffic isolation. Are you thinking of network slicing? Network prioritization? Maybe add to the chat how that is done in virtualization today that you want done on bare metal. Someone is asking in the Q&A why containers are more secure, when it should be vice versa. Vice versa compared to a VM? I'm not sure what your exact question is. There is a Slack channel where we can continue this conversation, because I think the session is wrapping up. So let me put that in the Q&A: the session is wrapping up now; let's continue in the Slack channel, please. Thank you for your participation and questions.