[The opening of the recording is garbled in the transcription.] We were going to have a demo if we had time, but I'm not on my laptop, so we won't be able to have one; if anyone is interested, grab me afterwards. This is a lightning talk. I'm showing my age here, but we're going to be a bit like the Road Runner: I'm going to talk quickly, so apologies in advance. A quick show of hands: who has heard of microVMs? Nice, cool. Has anyone used microVMs? One, good. [A garbled passage follows; the recoverable gist is a definition.] So what is a microVM? The "micro" part refers to the virtual machine monitor itself: rather than emulating a full machine with every legacy device, a microVM implementation provides only a minimal set of devices, which keeps the per-VM overhead small.
[A garbled passage.] There was some back and forth on Twitter when I posted something about microVMs. While I don't agree with both of the comments there, I actually agree with the sentiment of the first comment: if I'm running a Kubernetes node in a microVM, is it actually micro? Probably not. But I much preferred someone else's comment that likened Liquid Metal to the T-1000; that was a lot better for me. [A garbled passage follows.] Now, numbers. The one that is always quoted about Firecracker is "less than 125 milliseconds to launch", and that sounds really, really good. But that is the time to launch the Firecracker process; it is not the time to boot your guest OS, which comes on top of that. So take those numbers with a pinch of salt, but these implementations are designed to be fast from the host's point of view, and the quoted memory overhead is on the order of 5 MiB per microVM on top of what the guest itself needs. [The rest of this passage is garbled.]
[A garbled passage.] The guests are aware that they are running within a virtual environment; essentially, depending on your implementation, there may be varying levels of the virtio device types implemented there. What this all means is there is less bloat, "less hypervisor bloat" as it is commonly termed, a faster start-up time, and, with fewer devices supported, less code in there and less attack surface. All of these microVM implementations have a concept of volumes: you have to have a root volume, and they are all implemented as raw filesystem files or block devices, which can be really cumbersome to work with if you have ever had to use those. [A garbled passage follows.] Now, also depending on your implementation, there may be no BIOS and no boot loader. A microVM implementation such as Firecracker will start executing the kernel directly: it will go to a magic entry point and start executing from there, so that also improves the start-up time. Most implementations come with an API server built into them. This is a per-instance API server, and it is usually accessible via a socket. What this allows you to do is perform configuration against your microVM, so add a network interface or a volume. It also allows you to perform operations. The operations that you can perform depend on the implementation you are using: it could be starting or stopping your VM, or taking a snapshot.
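Since a per-instance API server over a Unix socket is how these VMMs are configured, the flow above can be sketched as a short sequence of HTTP calls. This is a hedged sketch using Firecracker-style endpoints (`/boot-source`, `/drives/{id}`, `/actions`); the kernel and rootfs paths are placeholder assumptions.

```python
import json

# Minimal sketch of driving a microVM's per-instance API (Firecracker-style).
# There is no BIOS or bootloader: /boot-source points the VMM straight at a
# kernel image, and /drives adds the raw root volume mentioned above.

def microvm_boot_requests(kernel="vmlinux", rootfs="rootfs.ext4"):
    """Return the (method, path, body) calls to configure and start one VM."""
    return [
        ("PUT", "/boot-source", {
            "kernel_image_path": kernel,            # kernel entered directly
            "boot_args": "console=ttyS0 reboot=k",
        }),
        ("PUT", "/drives/rootfs", {
            "drive_id": "rootfs",
            "path_on_host": rootfs,                 # raw file or block device
            "is_root_device": True,
            "is_read_only": False,
        }),
        ("PUT", "/actions", {"action_type": "InstanceStart"}),
    ]

# Each call would be sent over the instance's Unix socket, e.g. with
#   curl --unix-socket /tmp/firecracker.sock -X PUT http://localhost/boot-source -d '...'
for method, path, body in microvm_boot_requests():
    print(method, path, json.dumps(body))
```

The same pattern covers the operations mentioned above: stopping or snapshotting a VM is just another request to the instance's socket.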
Also depending on the implementation, there may be a metadata service built in, and this is super useful. It allows you to place information from the host into the metadata service and have that information available in the guest. This is commonly used for things like cloud-init, so you can supply your cloud-init data from the host and have it used during the boot process of that microVM. A lot of people also use it to pass in things like secrets: depending on your use case you might need a secret within that guest VM, and you can use the metadata service to do that.

Now, moving on to why you should care about microVMs. Fundamentally, it allows you to do more with the same amount of tin. If we have a bare metal machine and I want to run a VM on it, there are two parts to the cost: the resources required for the guest, and then an overhead per VM. If I am using a microVM implementation, the resource requirements for the guest are the same, but the overhead is less, which basically means I can run more on that machine. And if you think about edge and far edge, where the amount of compute is limited, this really makes a difference, because it allows you to run more things on that machine. As for the use cases, the big one that is always talked about is workload isolation. Because you are running within a virtual machine, it is more isolated than a container, and you might want that for regulatory, operational or data privacy reasons. I spoke to someone who was developing a data-pipeline-as-a-service solution, and they wanted to allow customers to run their own custom steps. But they did not want to expose the rest of the pipeline to code that came from the customer, so they decided to run each individual step within a microVM, isolated from the rest of the process. The next one is close to my heart, because it is what we did.
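As a sketch of the metadata flow just described, here is roughly what passing cloud-init data and a secret through Firecracker's MMDS looks like. The shape of the JSON tree, the instance id, and the secret value are illustrative assumptions, not a fixed schema.

```python
import json

# Sketch of a host-to-guest metadata flow (Firecracker's MMDS style).
# The host PUTs an arbitrary JSON tree to the /mmds endpoint; inside the
# guest the same tree is readable over HTTP at a link-local address.

host_side_request = ("PUT", "/mmds", {
    "latest": {
        "meta-data": {"instance-id": "team-a-vm-1"},
        # cloud-init user data and secrets can ride along in the same tree
        "user-data": "#cloud-config\nhostname: team-a-vm-1\n",
        "secrets": {"api-token": "s3cr3t"},   # hypothetical secret
    },
})

# In the guest, something like
#   curl http://169.254.169.254/latest/meta-data/instance-id
# would then return the instance id the host placed there.
method, path, tree = host_side_request
print(method, path, json.dumps(tree)[:60])
```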
We essentially run Kubernetes clusters, or rather their nodes, within microVMs, and that allows us to have lots of smaller clusters and potentially give one to every customer or one to every team, instead of having a smaller number of large clusters. And there are lots of other examples out there in the wild. Some of the most interesting ones use microVMs to run isolated build pipelines. There is a really good video by someone called Alex Ellis, who is using them to run GitHub runners locally, to compile kernels and things like that, which take forever on normal GitHub Actions runners. And because of the speed and the low resource requirements, it is a really good solution for creating testing environments and preview environments on pull requests: you can just spin up these environments very cheaply.

So how do you use microVMs? Essentially there are two main implementations that I think of as microVMs: Firecracker and Cloud Hypervisor. They have a similar heritage: they both started out from crosvm, and they both use the rust-vmm crates, but driven by their different use cases they have now diverged slightly. There are other solutions now, such as QEMU's microvm machine type, but that appeared after Firecracker and Cloud Hypervisor. Both support x86 and ARM, and essentially what you have to do is create an instance of Firecracker or Cloud Hypervisor per VM, so a process per VM.

Moving on to Firecracker specifically: it is used and developed by AWS, and it underpins AWS services, specifically Lambda and Fargate. What this also means is that it is designed with a very specific use case in mind, ephemeral compute, and that drives the features that are within Firecracker. This really translates into a reduced device model: there is no PCI passthrough, for example, and there is no macvtap support, because neither is required to run those services in AWS.
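The process-per-VM model mentioned above can be sketched like this; `--api-sock` is Firecracker's flag for the per-instance socket, while the binary path and the socket directory are assumptions for illustration.

```python
import subprocess

# Sketch of the process-per-VM model: each microVM gets its own VMM process
# with its own per-instance API socket.

def microvm_command(vm_id, binary="firecracker"):
    """Build the command line for one VMM instance (one process per VM)."""
    api_sock = f"/tmp/{vm_id}.sock"        # unique API socket per instance
    return [binary, "--api-sock", api_sock]

def launch_microvm(vm_id):
    """Spawn the VMM process; ten VMs means ten independent processes."""
    return subprocess.Popen(microvm_command(vm_id))

# e.g. vms = [launch_microvm(f"vm-{i}") for i in range(10)]
print(microvm_command("vm-0"))
```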
So the features are driven essentially by that use case. It does have a metadata service, so I can use that to do cloud-init on boot if I want to. But because of its ephemeral nature, it has no concept of pause or reboot; it is just start or stop, and you need to be aware of that if you are going to use Firecracker. You can increase the security of the Firecracker process: you can start Firecracker via something they call the jailer, and that forces the Firecracker process to be started within a network namespace, with seccomp filters applied to its system calls and cgroups to limit its resource usage. So you can really, really lock the Firecracker process down. One thing to note is that you can only use it for Linux guests.

Secondly, there is Cloud Hypervisor. This is a project started by Intel, Alibaba and a few other companies, and it is a more generalized virtualization solution, which makes it a bit more feature-rich. The device model it supports is greater than Firecracker's: it has a lot more virtio device types supported, specifically things like vDPA if you are interested in that. It supports PCI passthrough, so if you have machine learning workloads that you want to run in your microVMs, great; and if you use SR-IOV as well, this is an option for you, which Firecracker is not. If you are interested in secure compute and enclaves, it also supports TDX and SGX, which Firecracker does not. And it supports macvtap out of the box; I should caveat that Firecracker does have a feature branch open with macvtap support as well. There is no metadata service in Cloud Hypervisor, so if you want to get information in and out of your guest VM, you are going to have to use something else; you can attach a volume if you want to do that. Whether it is a good thing or a bad thing, it supports Linux and Windows as guest operating systems. There would have been a demo if I were on my laptop now, but that is probably about the 10 minutes.
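To make the jailer hardening described above concrete, here is a sketch of the kind of command line involved. The flag names follow the jailer's documented CLI; the uid/gid values and the chroot base directory are example assumptions.

```python
# Sketch of launching Firecracker through its jailer, which confines the VMM
# process with a chroot, dropped privileges, cgroups and seccomp filters.

def jailer_command(vm_id, uid=1000, gid=1000):
    """Build a jailer invocation that wraps the firecracker binary."""
    return [
        "jailer",
        "--id", vm_id,                       # names the jail under the base dir
        "--exec-file", "/usr/bin/firecracker",
        "--uid", str(uid),                   # run unprivileged inside the jail
        "--gid", str(gid),
        "--chroot-base-dir", "/srv/jailer",  # assumed chroot base directory
    ]

print(" ".join(jailer_command("vm-1")))
```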
So thank you. If you have any questions, let me know. Okay, did we have a question? If anyone has questions, raise your hand. All right, perfect. Are there any runtimes which are using microVMs? Any runtimes that are... Container runtimes, like... Yes, yes. So there is something called firecracker-containerd, so you can start up containers within an individual instance. There are other companies that are doing other things with it, but not specifically container runtimes. I have not seen that working with Kubernetes, though; I have just seen it used individually. Okay, I think we are out of time.