Cool. Are you all ready to get bored with a very boring presentation? By the way, feel free to leave comments or questions. I have like four monitors going, so I'm monitoring everything while giving this presentation, because it's kind of boring, so I have plenty of time to read everything else that's going on at the same time. I am Vince. I have the website darkain.com, where I've cataloged all the things that I've worked on over the past 20-something years in tech. Lately my favorite hobby has been getting more and more FreeBSD virtualization stuff going, especially around VMware and Parallels Desktop for the ARM platforms.

But before we get into the boring stuff, let's get into the exciting stuff first. I want to invite all of you to the FreeBSD Discord server. We have a little over 1,400 people in it right now, and we chat every day about development, using FreeBSD, ideas, goofing off, ZFS discussion, you name it, a little bit of everything. A lot of my virtualization stuff I share there, and the ARM and embedded stuff in general. We do live sessions all the time, kind of unscheduled. So basically like this conference, but completely ad hoc, often nights and weekends, where somebody says, yeah, I'm going to stream what I'm working on. And I'll show a little bit of the stuff I've been working on that was actually done while streaming there. So go ahead and join us. Yes, I already know your number one complaint, which is that we do not have any Discord client for FreeBSD yet. But it's that chicken-and-egg problem: do they build a client for the users, or do we have the users to tell them to build a client? So right now we're going with the idea that if we build a user community around a FreeBSD Discord, we can help push them to give us a native client, since it's just an Electron app.

But now, onto the boring stuff, and this is why ARM virtualization has gotten really boring throughout 2021. In August of 2020, VMware released their ESXi ARM Fling. Their Flings aren't official products; they're tests, demos, ideas, and concepts that they release to the public just for people to try and see if they like them or not. Some of them are web interface add-ons, drivers, different things like that. But they released one really cool, major one, which is their entire ESXi hypervisor, but for ARM. Going into it on day one, I just assumed a lot of things would be broken, because it is an early-stage test: it'll run, maybe you'll get a basic VM up and running, and you won't have all that much else going for it. But here we can see from this first screenshot on the left, this is in my home lab: a Dell R720, a dual-socket, dual eight-core system with a bunch of RAM, a bunch of storage, a whole bunch of VMs running on it, and a bunch of virtual networks all in different VLANs. And if you look at the right, I don't have as much built out on it, but it's 16 cores, the SolidRun HoneyComb board. But this just as easily runs on Raspberry Pis; I have probably between five and ten Raspberry Pis that are all running this exact same hypervisor right now. It'll run on any Raspberry Pi 4, the four-gig or eight-gig model, and recently they got it working on the Pi 400 as well.
So if you want, for some reason, a hypervisor inside of a keyboard, that is now possible. With this, the process of creating, managing, and working with your virtual machines is almost a hundred percent identical between x86 and ARM. That's how it is today. It wasn't that way at launch; there were a lot of issues we had to work through. Specifically with FreeBSD, SMP did not work on launch day. There were bugs in both the FreeBSD kernel and the hypervisor around the interrupt controller, the GIC, the generic interrupt controller, that prevented the secondary cores from being woken up properly. You can go check the bug reports for the exact details, but that's the gist of it. We worked with the VMware developers, and they helped sponsor patches for FreeBSD and fixed some things in the hypervisor as well. So today you can go ahead and install it, and on this 16-core box I can just start up a 16-core VM and it'll work right off the bat. There's no hacking or anything else that has to go into that.

Additionally, when the hypervisor first came out, the ISO images had an issue, and I'll get to that in just a second; I have an actual slide on the ISO images. Going back to the management real quick: storage is important for hypervisors as well. On my x86 infrastructure I already have dedicated iSCSI storage that I use. It's configured as a two-terabyte volume that I can just dump virtual machines into. And for ARM, like I was saying, I was expecting a lot of things not to work on day one, and surprisingly, software iSCSI worked flawlessly on ARM, just as well as it does on x86. If you notice here, the actual UUID of the volume, the iSCSI target, is exactly the same. So I'm actually sharing my existing x86 iSCSI target with my ARM infrastructure. I didn't even have to provision any new storage; I just plugged into the exact same infrastructure I was already running and instantly had storage to dump my virtual machines on. And with multiple hypervisors, like I said, multiple Raspberry Pis, they're all pointing to the same iSCSI store, so I can actually migrate my VMs between the Raspberry Pis without any downtime.

This right here is an example of actually browsing the datastore. This is my iSCSI datastore. Early on, like I was saying, we did not have a working ISO image for FreeBSD on ARM: the GENERIC kernel did not include the CD-ROM driver. So, a little ironic thing we ran into is that the CD image did not have the CD driver. When booting from UEFI, which is what ARM uses, the firmware goes through its basic boot process, hands off to the boot loader, and the boot loader hands off to the kernel. Once it handed off to the kernel, reading from the ISO image, or in this case the virtual CD-ROM drive, shifted from UEFI over to the kernel, and the kernel didn't know how to read from that device. So I went through a convoluted process: there was initially an installer image for a raw hard drive instead, we had one of those for ARM, and I converted that to a VMDK and attached it as a secondary virtual hard drive on the virtual machine (roughly the process sketched just below). And that's what this installer image is here, me going through that conversion process. It's just showing that, again, it's the same storage architecture, the same storage infrastructure, between my x86 and ARM setups.
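For anyone curious, this is roughly what that raw-image-to-VMDK workaround looked like. It's just a minimal sketch assuming qemu-img is available on the x86 machine; the file names, datastore path, and host name are examples, not the exact ones I used.

```sh
# Grab and decompress the official raw arm64 VM image (example file names;
# use whichever release you're actually installing).
xz -d FreeBSD-13.0-RELEASE-arm64-aarch64.raw.xz

# Convert the raw disk image into a VMDK.
qemu-img convert -f raw -O vmdk \
    FreeBSD-13.0-RELEASE-arm64-aarch64.raw freebsd-arm64-installer.vmdk

# Copy it to the shared iSCSI datastore; on the ESXi side you may also need
# vmkfstools -i to clone it into a native VMDK format before attaching it
# to the VM as a secondary disk.
scp freebsd-arm64-installer.vmdk root@esxi-arm-host:/vmfs/volumes/datastore1/
```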
So I can actually do all of the disk image processing on the faster x86 machine, converting the images from raw to VMDK, and then load them up on the ARM side effortlessly. Luckily, today we don't need to worry about that. FreeBSD 13 includes virtually every driver that we need at this point, and I'll get into a little bit more about the drivers. And as you can see here, NFS is working great on it as well. Another surprise where things just work straight out of the box, which is, again, why it's very boring. On the left you can see the amd64 image, and on the right the arm64 image. Today, if you want to create a virtual machine, you can just load up the disc1 image in the ESXi ARM Fling and it'll run through the normal install process exactly like you'd expect. I thought about making this presentation even more boring by going through the installer step by step, but I decided to scrap that to make this a little easier, and so you don't get bored looking at an installer for the entire presentation. I decided to change it up and show you a bit more of the infrastructure side.

Speaking of infrastructure, one thing VMware has going for it is vCenter, which is basically a centralized management system to manage multiple vSphere or ESXi systems from a single management console. It allows you to migrate virtual machines live, without taking them offline, from one machine to another. And even vCenter integrated perfectly with the ARM version of ESXi. As you can see on the left there, I have my one single vCenter instance, and it's connecting to multiple physical locations running ARM infrastructure, plus multiple physical locations running x86 infrastructure. Everything shows up in a single dashboard with a single management interface, and I have all of my performance stats, all of my configuration, everything in one nice central place, regardless of which architecture I'm running. So that makes it, again, easy and boring: if you already know the tools, there's virtually no difference between jumping from one to the other and having them in the same environment.

Obviously, because the architectures are different, you're not going to be migrating a virtual machine from an x86 box to an ARM box or vice versa. And even on that note, with the ARM architecture there are a lot more nuances in the processors than there are with x86. So even though the SolidRun and the Raspberry Pis are, I believe, both Cortex-A72s (somebody can correct me on that, I might have the number wrong), and even though they're the same processor generation within the ARM world, there are subtle differences between the two, so you cannot migrate between those two platforms. But between two different Raspberry Pis, it works. Which is nice, because if you want to live-migrate a virtual machine and then take a Pi offline for whatever reason, like upgrading the hypervisor, that's something you can do right now, which is also really cool.

Here, again, we see actual virtual machines up and running. We have a small screenshot from each of them running bsdinfo. On the left, again, we have our x86 infrastructure; on the right, our ARM infrastructure.
One of the cool things here is that starting in, I think it was October of last year, I started working on porting open-vm-tools, which is the guest tools package for VMware. That allows the guest and the host to communicate bidirectionally. Part of that communication is what you see there: the host name, the IP address, the version of VMware Tools running, and a lot of storage information is passed from the guest to the host, tying into all of that management infrastructure both in ESXi and through the vCenter client. So we can see the zroot volume mounted at the root of the file system, and then all of the other default ZFS datasets created by the FreeBSD installer. As of today, I believe open-vm-tools as of 11.3 is fully compilable on ARM. I don't think we have the package in the package repository yet, but it is updated in ports, and it takes a minute at most to compile and install; it's a very small package. So even if it's not in the pre-compiled packages, manually compiling it should work flawlessly on ARM at this point. That's a really good effort by everybody, both inside VMware and in the FreeBSD community, to get both sides of that together and working nicely.

Talking about the virtual hardware, this is where things start to diverge quite a bit. I know this text is going to be a little small, just because there's a lot to show at once. Again, we have the x86 infrastructure on the left and the ARM infrastructure on the right. On the right you'll see one big red box. That is what's called VMCI, the virtual machine communication interface, and that's one of two devices we do not have ARM drivers for yet. So if you're looking for a project to pick up, either one of those would be great; VMCI will probably be easier than the other one. The other one is the actual video card driver. I believe the VMCI interface is mostly the same between x86 and ARM; the only reason it isn't directly compilable is that there is a bit of machine code, some assembly, in the driver, and that needs to be converted over. The issue with the SVGA driver, the video card driver, is that they completely redid the virtual hardware interface from scratch. It's not using whatever I/O command structure they were using before; they've swapped it out for something more modern, and I believe they've added more modern graphical rendering features as well.

There is a whole host of other hardware too. While that other hardware does work, you have to manually add it to your loader.conf file (there's a small sketch of that below). Right now we've gotten that down to, I believe, only two items left that don't immediately auto-detect on boot: ums, the mouse driver, and snd_hda, the high-definition audio driver. The audio driver doesn't apply to ESXi at all, since there is no sound on that interface; it only matters if you're using it on a desktop, for example with VMware Fusion or Parallels Desktop, either of which would need the sound driver if you want audio. As for other drivers, historically we did not have PVSCSI or VMXNET at launch, the paravirtual SCSI driver and the paravirtual network driver. But those drivers had no machine code in them at all, surprisingly, so it was just a matter of ticking a box in the kernel config to enable them.
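For reference, this is roughly what those loader.conf additions from a moment ago look like on one of my ARM guests. A minimal sketch: the ums and snd_hda lines are the two that still don't auto-detect, and the commented-out lines only matter on older releases where the paravirtual drivers weren't in GENERIC yet.

```sh
# /boot/loader.conf -- example additions for a FreeBSD arm64 VMware guest

ums_load="YES"        # USB mouse driver (still has to be loaded manually)
snd_hda_load="YES"    # HD audio; only matters under Fusion/Parallels, not ESXi

# Only needed on releases that predate these drivers landing in GENERIC:
#if_vmx_load="YES"    # VMXNET3 paravirtual network
#pvscsi_load="YES"    # VMware paravirtual SCSI
```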
And now we ship them in GENERIC, so everybody can benefit from them, which is really nice. So at this point, as far as those are concerned, they have hit the boring category, where you just install FreeBSD and it works out of the box as expected. We also worked upstream with the VMware devs, because despite having the PVSCSI driver working in FreeBSD, it actually didn't work right in the hypervisor initially. Reading and writing worked great, but their virtual UEFI firmware would not boot from it. So if you had it as a secondary drive, just a generic storage drive, it worked great, but as the boot drive or OS drive it would just fail to boot. They've since fixed that, which is really nice, so that's back in the boring category too. Early on, what I was doing was still using the virtual SATA interface for my boot drive, and then using PVSCSI for my larger storage drive, for a little more performance with the MariaDB databases I run on ARM in my home lab for testing.

Funny enough, while working on that and submitting the hardware info, it turns out that particular website actually has a check for virtual machines. If it's a virtual machine, it'll remove the information after a few days, because they don't want to keep a lot of VM information around; they want the website focused on real physical hardware. And it turned out it wasn't detecting ARM virtual machines at all. I went and did some digging, and it's actually because in the FreeBSD source code itself, the ARM code was never written with virtualization in mind, since it's so new to the industry right now. So kern.vm_guest shows as "none" on ARM even if you're running inside a virtual machine (there's a quick check of that shown a bit further down), and the identcpu.c file for x86 has a bunch of detection code for a lot of different hypervisor platforms, and all of that code is currently missing on ARM. So if you're looking for a project to pick up, that would be a very good place to start if you're familiar with kernel programming. It shouldn't be too terribly complex: look at how it's done on x86 and apply it to ARM, because a lot of the detection process should be nearly identical between the two. And again, like I said, it just didn't exist before, because we didn't have this hypervisor stuff until the past year, so it wasn't on anybody's mind, and nobody had the ability to test it anyway. It made sense not to have the code until it was actually usable.

So what are we actually using this for? Me personally, I'm using it a lot for testing out different versions of applications all at once. I use ZeroTier quite a bit for SD-WAN networking, and I run a lot of regression tests. You'll see here different versions of FreeBSD listed, and different commit hashes of the upstream project also being tested. So when things break, I can have like ten virtual machines ready to go, with ten different versions of the program and ten different operating systems, all on one physical piece of hardware. That's allowed us to fix bugs and get bug fixes for ports out a lot quicker.

Speaking of running different software on this, I decided to install a full XFCE desktop, just to show that even though we don't have the SVGA driver, scfb, the console framebuffer, is supported and will run a full desktop. At that point you're just doing software rendering, but you still get whatever the performance of your CPU is, and it works pretty well.
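If you want to see the kern.vm_guest gap I mentioned a minute ago for yourself, it's a one-line check. The x86 output below assumes an ESXi guest; other hypervisors report a different identifier there.

```sh
# x86 FreeBSD guest under ESXi: the kernel knows it's virtualized.
$ sysctl kern.vm_guest
kern.vm_guest: vmware

# arm64 guest today: the detection code doesn't exist yet, so it
# reports bare metal even inside a VM.
$ sysctl kern.vm_guest
kern.vm_guest: none
```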
And here I have Firefox running on both of them. You may have noticed that the two Firefox instances don't quite look the same, and I'm not talking just about the web pages; the actual layouts are a little different. On the left, on x86, we're running the current version, Firefox 92, but on the right there is Firefox 78 ESR. The reason for that is that the upstream Rust port likes to break a lot on ARM, because they don't treat ARM, or especially FreeBSD on ARM, as part of their regression testing suite. So it relies on people from the FreeBSD community to discover and report the issues when Rust breaks, and when that happens, every single application that relies on Rust stops building. One of those is Firefox, and there are a few other applications I use that are also Rust, but Firefox is the most notable because it's one of the main open source browsers, and without a browser you don't really have a modern desktop. So another great way for anybody who wants to contribute is to monitor flaky ports like that, test them as new versions come out, and report when regressions happen, because right now it's a very manual process, and the more people we have testing, the better the chance of it getting fixed quickly. I do believe, as of a week ago when I last tested this, manually compiling Rust and Firefox works with the current version of Rust and Firefox 92; it's just not a prebuilt package in the current package repository, because on ARM those are lagging behind a bit, they're not getting recompiled as often, and that process takes many, many hours. Even on my 16-core HoneyComb it still takes several hours to complete.

Also on my HoneyComb (this right here is a virtual machine, not the HoneyComb itself, but to give an example), I do have a copy of FreeBSD 14-CURRENT that I completely self-compile at this point. So it is possible, just like on x86, to run a completely self-compiled environment, kernel, world, everything, and upgrade it through compilation rather than binary installs. I just wanted to show that at this point it is a very stable system. I've been doing this for about a year, running the same copy of FreeBSD CURRENT, and I've taken it from 13-CURRENT into 14-CURRENT and it's still running just fine. I actually have a bunch of jails on it running 12 and 13 for different testing and compiling. So if you want to do the full manual-style FreeBSD on ARM, again, it's the boring life: it is fully supported. I also fully expect to get yelled at for running this as root; I was just doing a quick test, but yes, I was running these as root.

My personal project, the latest thing I've been doing with one of my ARM virtual machines, specifically on my Mac M1 with Parallels, is porting the Linux c7 packages, the CentOS 7 ports, over to ARM. As you can see here, it works: the Linuxulator does work on ARM (the basic setup is sketched just below), and here's a number of Linux binaries fully working just fine, as you would expect. Nothing too complex yet, since this is still fairly early in development. Right now there are right around a hundred total c7 ports, and all but five of them currently compile on ARM. The few remaining ones I just need to go in and do a little bit of hackery on. It has nothing to do with FreeBSD; it's about working around upstream bugs in CentOS itself. They actually screwed up their own ARM port.
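For context, this is the standard Linuxulator setup as it works on x86 today; on arm64 you currently also need the in-progress c7 ports changes I just described, since the prebuilt linux_base package isn't there yet. Treat it as a sketch, not a finished recipe.

```sh
# Enable the Linux binary compatibility layer and load its modules.
sysrc linux_enable="YES"
service linux start

# Install the CentOS 7 userland from ports (no prebuilt arm64 package yet).
make -C /usr/ports/emulators/linux_base-c7 install clean

# The emulated Linux root now lives under /compat/linux.
ls /compat/linux
```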
I do understand that CentOS 7 is very old at this point and we want to move on to something newer, but this paves at least an initial pathway to say, hey, the Linuxulator works on ARM, we have a template for it, and we can apply this template to other distributions to get more Linux binaries working on ARM. If somebody really wants to help out in these efforts, there are quite a few ports out there today that have ONLY_FOR_ARCHS listed in them, and they may list only amd64 or i386. The reason they have that is often that they didn't work with an earlier version of FreeBSD, back in 12, but since FreeBSD 13 hit and arm64 is now a Tier 1 architecture, there have been numerous bug fixes, and the only thing preventing certain ports from running is the fact that they're still gated off. So just go through and look at the different ports, check FreshPorts to see whether there's a build for ARM or not, and if there's one for x86 but not one for ARM, just try flipping that one flag and see if it compiles and runs (there's a small sketch of what that looks like just below). More often than not, you're going to find that it works. It's a fairly simple task, but a time-consuming one, to go through as many ports as possible and try to get them updated and validated.

Now, I know some people have questions about the performance of virtualization. I built this chart several months ago, so it's not on the current version of the hypervisor, and I do know they've fixed some things, but just to give you an idea: the three sections of the graph are the Raspberry Pi performance, the HoneyComb performance, and then the MacBook Air M1 performance. Sadly, I don't have anything to compare the M1 against right now, mainly because I only have virtualization there; we don't have native FreeBSD yet, even though I know the BSD and Linux communities are actively hacking on getting other operating systems running on the M1 processors, which is going to be awesome, quite frankly. Overall, you can see you have maybe a five to ten percent performance loss using a hypervisor, except in the case of four cores on the Raspberry Pi. I haven't quite narrowed down exactly why that is yet; it could come down to the way the scheduler handles that system a little differently, or it could be the fact that I was using a different storage system. I haven't had the chance to go back and look at why the four-core Raspberry Pi results flipped, why virtualization was faster than bare metal there; that shouldn't be the case. So that's going to be an interesting point of research to go back to when I have some more free time.

And lastly, looking ahead to the future: VMware just this past week announced that their Fusion tech preview for the M1 processors is now in private testing. Anybody can go to their Twitter account or their blog, where they mention that you can email them or register and sign up, and they're taking a select few people from the community to test the Fusion M1 tech preview, just so they can get an idea of how different operating systems will work on it.
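Going back to the ONLY_FOR_ARCHS point for a second, here's roughly what that experiment looks like. The port name is made up purely for illustration; finding a real candidate on FreshPorts is the actual task.

```sh
# Hypothetical gated port -- substitute one that builds on amd64 but not arm64.
cd /usr/ports/sysutils/example-tool
grep ONLY_FOR_ARCHS Makefile
#   ONLY_FOR_ARCHS= amd64 i386

# Add aarch64 to that list (or drop the line) in the Makefile, then try it
# on the ARM machine:
make build

# If it builds and runs, file a bug or patch so the gate gets lifted upstream.
```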
This screenshot is from VMware's official blog about that tech preview, and as you can see, all of these virtual machines are running at the same time on a single M1 system: Debian, Ubuntu 20, Ubuntu 21, Kali Linux, Fedora, VMware Photon, which is what they use for vCenter, and of course, down in the bottom there, we have FreeBSD 13. So that is something they're actively looking at and making sure continues to work, and they've been really good at communicating with us back and forth to make sure everything works properly for the FreeBSD community.

So that's about all I've got right now. If there are any questions? I don't see any in the chat. Any in the shared doc?

"Will the Linuxulator move to CentOS 8, which is soon end-of-life, or would something like Alma Linux or Rocky Linux be better? Is Debian or Ubuntu on the roadmap?" I don't have a good answer for that, because it's more or less up to what everybody in the community as a whole wants to use, as far as which distributions we want to support. I believe Bastille, BastilleBSD, is working on Ubuntu jails right now, and I think that's really, really promising. So I'm hoping to start playing with that, and whatever's done for x86 will most likely work flawlessly on ARM at this point.

"Any information about bhyve on ARM? How does it relate to ESXi?" I've heard there is an experimental version of bhyve for the ARM platform. I have yet to personally test it, and I don't know its current status. The last thing I heard was at the FreeBSD Dev Summit, which I think was back in June, and I haven't looked into it since then because I've been busy with other projects.

Those are the only two questions I see in the chat here; I hope I answered both of those to the best of my knowledge. So yeah, especially with which distributions, it's up to the community to decide. If there's a distribution that you want, start pulling it in and see what we need to get it to run. And like I said, even if you start on x86, we can move it over to ARM effortlessly. Most of the work that I've done for CentOS applies to any distribution; it was more or less just getting the package repositories to say, hey, there's more than just the x86 architecture, there are other architectures. As that work is being done, it applies there too. And honestly, a lot of that work, the way things are being pulled away from x86-only assumptions and those gates are being put into place, also applies to RISC-V. So I'm super excited that more and more people are getting into RISC-V as well, because everything I'm doing here will apply there with very simple changes.

So I think that's about all I see there, and I'm not seeing anything on our Discord either. So yeah, if you want to join our Discord, that would be really, really awesome. I'll bring that up one last time: just join our Discord. Thank you, everyone.