Good morning. How's everyone? Oh, good. Thank you. Let's give it a few more minutes; hopefully a few more people join. I posted the meeting notes in the chat, so please add yourself as an attendee. All right, we've got about six people, so hopefully others will join later. So we've got Tom — you're from Packet, and you're going to be presenting Tinkerbell. Take it away.

I'm unmuted now. Hey, thanks for having us. We're here to talk about Tinkerbell, an open source project we're building at Packet. Tinkerbell takes care of automating the management of bare metal servers — it takes care of provisioning. Gianluca is going to go a little more in depth and show off some of the cool power that Tinkerbell brings.

Yeah. I joined Packet a few months ago, and my thinking was: when you get a server, you take it out of the box — how do you make it do something useful? Tinkerbell tries to answer that question, giving the user the flexibility to do whatever they want in terms of installing the operating system and running actions: going from a server that doesn't do anything to the cloud-init experience we all know, coming from Ubuntu or AWS or any other cloud provider. The idea is to close the gap between a fresh server you get from a shop and the cloud. The way we do it is via API, because that's what we learned in our cloud journey. Let me share my screen. If you've already had a look at it and have any questions, you can just stop me.
Otherwise I'll start from the documentation and share a little of what we've done so far. As Tom said, it's an open source project, so it has its home on GitHub, and it's made up of microservices — or services, call them what you want. The main one is tink: it provides the CLI and tink-server. Tink-server is the equivalent of a control plane in Kubernetes land; it's the service that receives requests, stores them, and serves them to the workers, and the workers are your servers. Usually, when you have a server, you power it on and the boot starts. Nowadays all servers support network booting — that technology is PXE — and we leverage that to serve our base operating system. The base operating system we serve over netboot via iPXE is called OSIE, and it's available as an open source project as well. So the first thing you see when you start a worker is OSIE itself. It's an in-memory operating system based on Alpine. The server powers on and makes a DHCP request asking for an IP. Tinkerbell's DHCP server is called Boots — also available as an open source project — and what Boots does is respond with the OSIE operating system. So the operating system starts in RAM and gives you a shell you can use. Inside OSIE there is Docker, which we use as the runtime. As you can see in the documentation, in practice what you do is specify a template and transform the template into a workflow. The template is in YAML format, and it recalls Docker itself a little, because it uses images.
So what we are saying here is: the worker with this MAC address executes the workflow from this template — this template here.

Can you make the screen a little larger? — Sure. — And then I have a question: do you have an architecture diagram somewhere in the documentation? — We have one, I just don't remember where it is. — I didn't want to break your flow. — No, that's fine. Thank you.

Yeah, so as you can see, this is the architecture. There is the control server, which is the provisioner itself, and on it runs tink-server. Tink-server provides gRPC and HTTP APIs, and it contains all the workflows and stores all the hardware representations — you can register your hardware in tink-server, and you submit the templates via the CLI. When the worker starts, as I said, it makes the DHCP request, gets the DHCP response, and boots the OSIE operating system via iPXE. It's Alpine-based; inside there is Docker, and from there the worker is able to execute every task you ask of it. One of the workflows we have in the documentation is the hello-world one, but it's not super fun. Another one is the Ubuntu one, which in practice brings you to a fully persisted and working Ubuntu operating system. As you can see, the Ubuntu template is the same idea as before: a YAML file that describes all the steps and actions that have to be done. The first thing you have to do when you install a new operating system on a server is wipe the disk, so there is an action that wipes the disk. There is another one that makes the partitions — we have two disk partitions — and sets up swap, the home directory, and where the operating system lives. Then you install the root filesystem, configure GRUB, and start cloud-init. That's roughly what happens when you install any operating system.
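A template along the lines of the one described above might look roughly like this. This is a sketch based on the Tinkerbell template format of the time, not the exact file from the docs: the action image names, timeouts, and volume mounts are illustrative.

```yaml
version: "0.1"
name: ubuntu_provisioning
global_timeout: 6000
tasks:
  - name: "os-installation"
    # Placeholder bound to a registered worker's identity at workflow
    # creation time (e.g. a MAC or IP address).
    worker: "{{.device_1}}"
    volumes:
      - /dev:/dev
      - /statedir:/statedir
    actions:
      # Each action is a Docker image run by the worker, in order.
      - name: "disk-wipe"
        image: disk-wipe
        timeout: 90
      - name: "disk-partition"
        image: disk-partition
        timeout: 180
      - name: "install-root-fs"
        image: install-root-fs
        timeout: 600
      - name: "install-grub"
        image: install-grub
        timeout: 600
```

Each action image here stands in for the real wipe/partition/install/GRUB containers discussed in the talk; the structure (tasks → actions, each with an image and a timeout) is the important part.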
And if you want to know what each of those actions does, you can follow along in the repository. As I'll show you, every action is a Docker container, and the container can be as complicated as you want. The wipe action executes a bash script — the script itself is more involved, but what it does is wipe the disks: it erases all the data and prepares the disk for the boot loader of the operating system itself. And this is, at a very high level, how Tinkerbell as a whole project works. If you have a look at our repositories, you can see how we split the responsibilities of the project, because in theory you could use a different in-memory operating system if you have your own — you don't need to use OSIE. That's why it has its own repository; we are building a release lifecycle for every project, and so on. The Hegel repository is the metadata service. We're all used to cloud computing, so we know we can call an IP from inside a server and it responds with the metadata of the machine itself, and that's what we do with that microservice; it's available for every machine you start with Tinkerbell. Boots is the first interaction a server has — it's the DHCP server, and we leverage PXE boot, as I said. That's how it works: the machine starts, it gets an address, and it gets a temporary operating system. We also have a microservice that helps us interact with BMCs, so we can switch machines on and off programmatically. We've started to build a graphical UI — there's a portal — and we're collecting workflows now and trying to figure out how to make them reusable in a good way.
Luckily for us, we decided to use OCI/Docker images, so technically we can reuse those.

I have a question. So OSIE is just an in-memory operating system, but that's not the operating system that ends up running on the machine, right? — Yeah, you're right. We use that operating system only as a first way to run actions on the server. As soon as the Tinkerbell workflow wipes the disk and installs the Ubuntu operating system, we configure GRUB, so GRUB switches the boot from the network to the disk. From there you start from your disk, which has Ubuntu or Debian or whatever, and OSIE is not used anymore — it was running in RAM, so it just doesn't exist anymore. — Got it.

Are you integrated with Kubernetes in any way? — We've had a lot of requests from the Cluster API community for an implementation, but right now we're working on the Tinkerbell core itself. I think for the next couple of months we'll keep working on day-two lifecycle and stability work, but the Cluster API implementation is for sure a priority for us. — Thank you. — Other than that we don't do Kubernetes, but we hope to build that with the community very soon.

And yeah, I don't have a lot more to say. Tom, I don't know if you have any parting words for us. — Just that we're really excited about this project and we're looking for more people to get involved. If you have any questions, feel free to ask now, or you can reach out to us — both of our emails are in the agenda.

I have a couple of questions. First: what database do you support? — I think it's written somewhere, but we use Postgres right now; that's the only SQL database we're using at the moment. — Got it. And you also mentioned Docker as a runtime.
So after the main operating system is installed, does Docker remain on the host, or is it also temporary, just for delivering the bits? — Yeah, it's temporary, only for delivering, installing, and running all the actions on the server. After that, you have your operating system of choice without anything extra. — Thank you. — I think you broke up. Maybe it's my connection, I don't know.

I have a question — this is Diane. How do you discover what hardware you're provisioning on? I mean, if there are accelerators on it, or something unusual like high-speed networking, do you have some sort of discovery of what exactly you're provisioning on? — Yeah, there are two different ways we currently support. One is that every piece of hardware registers itself when we get the first DHCP request — and obviously with that we don't really get a lot of information about the host; we just know there is one host with a MAC address, that's it. The other way is to register all the hardware in tink-server yourself. When you register it, it's a JSON document that you send, and you can save metadata as JSON, so you can mark and label your hardware. — Okay, so you've created this list of things that might exist in this metadata format. Is that also in the GitHub repository? Just curious to see what that looks like. — Yeah, it's part of the tink repository. I'll show you. — Cool.

Hey, Gianluca, you're breaking up really badly. Do you want to try killing video, maybe? — Yeah, let me change my connection for one second. ... I'm back; let me know if it works better or not. — Okay, thank you.

So this is an example of the hardware data you can register with tink-server. As you can see, the ID is mandatory, as well as the MAC address, because we use those as identifiers to point workflows at, but you can store way more.
So we store the facility, and we store the layout of the storage — how we want the partitioning to be. This is the way we teach Tinkerbell the layout of our hardware. As I said, it can also be done automatically from the first DHCP request, but you get much less flexibility, because there isn't much to get from a DHCP request.

Okay, so whoever is providing the servers specifies it this way — it doesn't go out and investigate. You don't have a script or something that investigates what hardware you're running on? — No, that part for now is up to you; you have to do it. — Okay. So the number of CPUs and things like that, whether there's hyperthreading and all that — can you do that, or is that something you're planning? — At the moment that part is not covered by Tinkerbell. We are thinking about an inventory management solution; I think it will come, because we had the same question you raised, but we haven't gotten that far yet — it's under discussion. I think we'll at least get a prototype. There is a reconciliation phase that happens when OSIE starts, which sends some information, like the architecture. That information comes from OSIE, because OSIE runs Docker and our tink-worker daemon, which sends and reconciles it. So we have something, but it's not a full inventory as we usually think of one. I think we'll do it at some point. — Okay, just curious. It's not an easy problem to solve — there are so many different flavors of hardware out there, and doing discovery automatically can be difficult because you don't know what you're looking for.
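A hardware record like the one just described — mandatory ID and MAC, plus facility and storage layout — might look roughly like this. This is a sketch, not the authoritative schema (which lives in the tink repository); the field names, sizes, and addresses are illustrative of the era's format.

```json
{
  "id": "0eba0bf8-3772-4b4a-ab9f-6ebe93b90a94",
  "metadata": {
    "facility": {
      "facility_code": "onprem",
      "plan_slug": "c2.medium.x86"
    },
    "instance": {
      "storage": {
        "disks": [
          {
            "device": "/dev/sda",
            "wipe_table": true,
            "partitions": [
              { "label": "BIOS", "number": 1, "size": 4096 },
              { "label": "SWAP", "number": 2, "size": 3993600 },
              { "label": "ROOT", "number": 3, "size": 0 }
            ]
          }
        ]
      }
    }
  },
  "network": {
    "interfaces": [
      {
        "dhcp": {
          "arch": "x86_64",
          "ip": { "address": "192.168.1.5", "netmask": "255.255.255.248" },
          "mac": "08:00:27:00:00:01"
        },
        "netboot": { "allow_pxe": true, "allow_workflow": true }
      }
    ]
  }
}
```

The point is the shape: identity (ID, MAC) is required so workflows can target the machine, while everything else — facility, partition layout, labels — is operator-supplied metadata that Tinkerbell stores rather than discovers.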
So I think this is a nice approach. — Yeah, you're right. We also took the direction of asking for registration because we don't want Tinkerbell to take over your whole data center when you start it. You know, if you have DHCP and you don't segment your network, everybody starts to get it, so if we did auto-discovery in a very aggressive way, Tinkerbell would just start provisioning. This way you say what you want provisioned — maybe you have an accelerator out there that you don't want to include; if you put it in the registration, then it's included. That's why we have those two very different strategies: you can use Tinkerbell in auto-discovery mode, or you can register your hardware yourself. — Okay, thank you.

So are you also thinking about making this a CNCF project? Are you interested? — We've definitely been talking about that; Mark Coleman is heading up what that's going to look like in the long run, but that's definitely something of interest. — Okay. We have a new sandbox process — I mean, we have the CNCF sandbox, incubation, and graduation stages, but typically projects are submitted first to sandbox, and they stay there for a little while. Later, when they want to become more of a real project that more people are using, some of the TOC members do due diligence, then they go into incubation, and finally into graduation. — Right. Yeah, we're working on some stability issues, expanding what Tinkerbell does, how it works, and what it offers — as well as, like I said, getting some more stability there.
We're getting a few things off the roadmap, into play and into production, and then that's something we'll definitely be looking at a little more. But like I said, Mark has done a lot of work with the CNCF and we have a great relationship with them — a lot of CNCF projects are built on Packet — so we've got a deep relationship with the foundation, and we'll be moving forward with getting Tinkerbell over there as soon as it's ready. — Yeah. And Lina just posted the link to the sandbox applications, so thank you so much, Lina, that's awesome.

Yeah. So we had another project that I presented a couple of months ago called MetalKube. Are you guys familiar with that project? — Metal what? — MetalKube — like Metal³. — I'm not personally familiar with it. — How would you describe Tinkerbell compared to it? I mean, I think MetalKube uses Kubernetes — how would you describe some of the differences? — I'm familiar with it as far as our Cluster API implementation is concerned, because the Packet one is new and we obviously had to look at the other implementations, but I don't have practical experience — I haven't tried it, so I can't really do a comparison. — Yeah. I mean, some of the questions that get asked at the TOC, for example when projects go into incubation, are how some of these projects are different, because a lot of times they just want to fill some gaps, and maybe they want to promote a certain project in a certain way — like, this is good for this type of thing, right?
So maybe those are questions to keep in mind. — Right, absolutely. I'm also sure that somebody in the company already has an answer; I'm just not the one. — Right. Does anybody have any other questions, anything they want to discuss? Well, thank you for the presentation — it was really helpful, and I hope this can become a CNCF project in the future. — Hey, thanks for having us. We look forward to working with y'all a little more closely once we get this ready to roll over into the CNCF. — Thank you. Thank you.