I didn't know whether I was waiting for someone to do an introduction or whether I just start. All right, okay, that makes it easier. Ideally people could join from outside, but since this was just approved on Friday and I only announced it today, I think the odds of someone actually watching are relatively low. Like I said, I poked a few specific people and I announced it, but that was a couple of hours ago, so it's still early in North America.

All right, this is going to be incredibly informal. I have a couple of slides, but the idea is mostly to get some like-minded people in the same room, talk about what we're trying to do, and hopefully move on from there.

Some of this is also around SIGs. We've been trying a combination of reviving some old SIGs and creating some new ones, because a lot of the old AI and machine learning SIGs were activity-less, so we were trying to revive some of that. There has also been a recently renewed effort to get heterogeneous compute support into Fedora; in particular, there's momentum around AMD's ROCm stack. There's a question of whether that should be the same SIG or not. Sorry, because this was only accepted on Friday, I did not have a lot of time to prepare. So, like I said, that's what's going on with ROCm.
We've got several packages approved. I don't think we're going to finish before Fedora 39 branches, but I'm always up for being surprised in that area. One of the targets is to have enough of ROCm that we can have accelerated PyTorch running on Fedora, and I think there's some other work being done, particularly on Blender, to get Blender's accelerator functionality working with AMD hardware. There are also some very early plans around getting PyTorch packaged in Fedora; it's early enough that the few things that have been done are mostly on the Fedora Discourse instance.

One thing I wanted feedback on, if anyone here has any: there's been some discussion on how to do communication. I requested a Matrix channel and an IRC channel. Given the number of people who have chatted in them, either they're not known about or nobody wanted them in the first place; I'm unclear which. Does anyone have any thoughts? Is Matrix even something people still want? There's no one in the IRC channel either; it's just dead, and has been.

We're being recorded, but I'll call out Red Hatters anyway, right? There are people at Red Hat who are involved in Fedora, including on the Fedora Council, who post on an internal Slack AI channel, when they could just post that stuff on the public channels, either in chat or on Discourse. I have been gently nudging for a while; maybe less gentle nudging would help. I don't mind there being an internal thing, but a lot of that internal content could be brought out here, and I think that would help. It would make the public channel not feel dead, and then people would feel like it's a place to be. Yeah, and other people who are not Red Hatters too.
I'm not blaming you if you are a Red Hatter who should be called out here. But other than the two of us, I don't think I see anyone who's been a regular on the AI Slack channel. So yeah, I will drop my mic and let somebody else speak.

No, it seems like a lot of silence, so either people don't care a whole lot, or at least there's no violent objection. One of the things, and honestly I didn't come up with it and I don't remember who did, is the idea of Fedora being a do-ocracy: things only happen if you put boots on the ground and actually do them, and if no one stops you, it gets done. That sounds like something Robyn would say, but I don't remember where I heard it; it's been long enough. So I guess we'll just keep going with that.

The other topic is part of a larger discussion around mailing lists versus Discourse. I think the current plan is to get rid of the mailing list that isn't being used and just keep things on Discourse, because that seems to be the direction things are heading regardless of how anyone feels about mailing lists. There are practicalities involved: what has really pushed me in that direction is the maintenance burden of Mailman on the infrastructure folks. And if there are no other thoughts, there seems to be more activity on Discourse than there is in Matrix, at least for the AI/ML stuff.

One of the other things, which I didn't make a slide for: how many folks are interested more in the AI/ML side versus the heterogeneous compute side? And is anyone the other way around, more interested in things like accelerated Blender? Not just ROCm, either; Intel's oneAPI would fit into that as well. There has been some question around whether these should be the same SIG or different SIGs.
So I'm just trying to gather information on that. I don't know who would lead it. Well, that's always the question: who's going to put the time into running all of it? I don't think we have an answer for that. But the voices who most wanted to see them separate are not able to be here, so silence in this room does not necessarily mean consensus.

Yeah, I don't really care either way; I just don't want there to be two small groups that are never able to find a time when they can both get together and never have enough people to do anything, when one combined group would be better.

Yeah, and so there's a little more discussion to be had, because there was one person in particular who wanted to see them separate; I don't think anyone else cared. So honestly, I think it's going to be up to him: if he wants to lead a separate group, I'm not going to stop him.

Just a bit more on whether there are other things folks are interested in. These are the things that came to my mind first, in terms of both the AI and machine learning side and the heterogeneous compute side. Some of this is closer than the rest: oneAPI is, quite frankly, a ways away from being packageable in Fedora. There's work that needs to happen upstream before it's something that could even go into the repos, so that's further out. Like I said, for ROCm, hopefully we will have enough to run PyTorch on by the time Fedora 40 releases, at least at the rate we're going. And PyTorch is an ongoing discussion; there is quite a bit of work to be done there in terms of dependency packaging and in terms of questions around how we can do it.
And what we can support. Whether we like it or not, NVIDIA is the 800-pound gorilla when it comes to scientific computing, but the way they license their software means it's not something we can distribute in the Fedora repos. So can we build things in such a way that you could install part of it from somewhere like RPM Fusion, or provide directions so you could download and set it up yourself? There are a lot of these questions that have not been answered yet; they're on the list of things to do for PyTorch.

Do you know how far off their open-source driver is? The release was supposed to be GPU-compute focused.

It depends on which part. In my experience working with the NVIDIA stack for AI/ML, the driver is not the problem. The biggest hurdle I've always had is cuDNN, the neural-network-specific library built on top of CUDA. It has the most stringent requirements: you have to have some range of versions of GCC, some range of versions of glibc, and so on, and the times I've looked, you can't find those together in any currently supported release of Fedora, if that makes sense. So the problem with PyTorch and the like on NVIDIA in Fedora is not the driver part; it's CUDA to a certain extent, but mostly cuDNN, the neural-network-specific stuff built on top of CUDA. Does that answer part of your question? Okay, thank you.
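To make the version-pinning problem concrete, here is a minimal sketch of the kind of compatibility check involved. The toolchain version ranges and the Fedora toolchain versions below are hypothetical placeholders for illustration, not cuDNN's real requirements:

```python
# Illustrative only: the version ranges below are made-up placeholders,
# not real cuDNN requirements.

def parse_version(v: str) -> tuple[int, ...]:
    """Turn a dotted version string like '13.2' into (13, 2) for comparison."""
    return tuple(int(part) for part in v.split("."))

def in_range(version: str, low: str, high: str) -> bool:
    """True if low <= version <= high, comparing components numerically."""
    return parse_version(low) <= parse_version(version) <= parse_version(high)

# Hypothetical toolchain requirements for some cuDNN release.
requirements = {
    "gcc":   ("11.0", "12.3"),
    "glibc": ("2.31", "2.37"),
}

# Hypothetical toolchain versions shipped by a Fedora release.
fedora_toolchain = {"gcc": "13.2", "glibc": "2.38"}

compatible = {
    name: in_range(fedora_toolchain[name], low, high)
    for name, (low, high) in requirements.items()
}
print(compatible)  # {'gcc': False, 'glibc': False}
```

The real-world version of this check is what makes the stack hard to package: the supported ranges move with each release, and a current Fedora's toolchain often sits outside them.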
Is it possible we could ship a container, or point at a container from somewhere, that has the drivers?

Yeah, there are always relatively simple ways for people to install the closed-source drivers on Fedora. Someone responded to one of the Discourse posts recently about how there used to be a way to do this: something called nvidia-docker, or docker-nvidia, I don't remember which way around it was, but basically it was a mechanism through which you could expose the GPU to the container. Then you could have all the proprietary stuff, with all the pinned versions of things you need, inside the container, and run that on a system that just had the NVIDIA binary driver installed. We can't really do Docker in Fedora anymore, so that stopped working. But there is a new project that is more generic and should be able to work with Podman, as far as I know. Someone is working on it, but I don't know all the details, so hopefully that's coming and will be a way to do it.

Are there other things that fit into AI/ML or heterogeneous compute that people are interested in, other than the things I've been talking about? These are the things I know of, but just because I don't know about something doesn't mean it doesn't exist.

Wait, can you wait for the room mic? Do we have an objective defined for the SIG, or is this the discovery phase?

More the discovery phase. Like I said, it's a combination of revival and creation. The old SIGs were dead, and we're trying to create something new that either replaces or repurposes the existing SIGs. So effectively...
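For reference, the newer, more generic mechanism for exposing GPUs to containers is, as far as I can tell, the Container Device Interface (CDI), which Podman can consume directly. Below is a minimal sketch of what a CDI device spec looks like; the device name and node paths are illustrative, and in practice a tool such as NVIDIA's `nvidia-ctk cdi generate` writes this file for you:

```json
{
  "cdiVersion": "0.5.0",
  "kind": "nvidia.com/gpu",
  "devices": [
    {
      "name": "gpu0",
      "containerEdits": {
        "deviceNodes": [
          { "path": "/dev/nvidia0" },
          { "path": "/dev/nvidiactl" }
        ]
      }
    }
  ]
}
```

With a spec like this installed under `/etc/cdi/`, a container should be able to request the device by its qualified name, along the lines of `podman run --device nvidia.com/gpu=gpu0 ...`.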
No, other than what I've talked about: getting PyTorch to the point where it can at least be accelerated on Fedora without having to go to binary blobs, starting off with ROCm.

I think it would really help if we had an objective that can evolve over a period of time. If we have an objective, then you can attract more people. It need not be set in stone; it can evolve. So that's one thing. The other thing I was interested in, and this goes back to the offline conversation that we had, is the infrastructure for testing some of this stuff. Do you want to bring that up?

Yeah. There are still a lot of questions. One of the things that has become pretty obvious as we work to package ROCm is that it needs to be tested in an automated way. It's going to be fragile enough that trying to do all of it manually is not going to end well, at least not for my sanity. So there is an open question of how we can test that stuff. Technically there is an AMD GPU available in Amazon's cloud that they may or may not allow us access to; I had problems getting to it personally. I don't know; like I said, I was very irritated with how difficult it was for me, on my personal account, to get access to the instance with the AMD GPUs. And there's still an open question of whether it's good enough for ROCm, because it's several generations old and isn't on AMD's official support list for ROCm.

I thought I saw one that was much newer, and beyond AMD they have brand-new NVIDIA stuff.

There was a newer one? I can't remember which one it was offhand, and I'm too tired. I thought it was the V520 that they have.

No, it was much larger numbers than that, but I don't remember the details. But David Duncan is here from Amazon, and if anybody can hook us up...
It's him. Okay, if we can do this in the cloud, that would be much easier than anything else I can think of. Because for the rest of it there are open questions: what system would we use to test it (my first thought is openQA); if we do that, is there room in the racks they use in Virginia for Fedora infrastructure; is there funding for the machines the GPUs would go in; can we get the GPUs to test with; and is this even a good route to go? There are just a lot of open questions that I don't have answers to.

The other thing, connecting this to your list of things people are interested in: I note that Matt Hicks is interested in the OpenShift data science, Kubernetes GPU-computing area. Having Fedora provide a good experience that ties in with Kubernetes in the cloud for doing compute would probably help in terms of getting positive attention from our large sponsor. Not that everything is about Red Hat; it's just a nicely aligned thing right there.

Well, I mean, speaking purely from a pragmatic point of view, yes.
I do work for Red Hat, and I don't know anything more than what we're talking about here, but the point is: if you want to get funding for something, appeal to a sponsor that has money.

Going completely the other direction, a thing I think is interesting is the "microtorch" work that Peter Robinson was talking about, which is basically building models that will then run on something like an ESP32 or another really tiny microcontroller. I think having Fedora be an interesting development environment for that would be cool, because I can see cases where I would use it. For example, I would like to be able to recognize whether it's a cat or a human going up and down the stairs at night, and not turn the light on for the cat, because the cat doesn't need it but the humans do. That would be a fun little project that could probably fit in that space with some sensor data, though it's probably of zero interest to Red Hat. But yeah, it would be cool.

There are some other interesting things that Peter brought up in discussion, or on Discourse, about some of the OpenCL stuff and trying to get it to run on newer hardware, especially some of the AArch64 things. But for the moment, like I said, the immediate focus is ROCm, because in terms of what we can realistically do in a short period of time, that is it. It would be great if the OpenCL stuff worked for acceleration in the future; it would be great if we could get NVIDIA stuff to work; it would be great if we could get Intel stuff to work. But in terms of what is probably okay license-wise with Fedora, that probably works, and that we can get done in a reasonable amount of time, we're looking at ROCm for acceleration and then probably PyTorch on top of that.

About OpenCL: it should already be supported, but I'm not sure how usable that is for AI and acceleration.

I was talking specifically about a conversation on Discourse about an OpenCL backend for PyTorch.
There is, or there was, an experimental one, and there was some other support, but there were performance issues and there were maintenance issues. So, OpenCL aside, it was that specific backend for PyTorch I was referring to; I'm sorry I didn't elaborate on what I meant.

Well, I think we only have a couple of minutes left, if no one has anything else. I'm going to pass the microphone over to Jeff.

As far as hardware for ROCm goes, is there an extensive list of GPUs that are supported?

I wish I had a good answer to that question, so I will answer it with what I know. AMD's officially published documentation lists the GPUs that support ROCm; there are three or four of them, all of which are over $2,000. There are other lists I've seen, and I know it doesn't only run on those; that's just their official list. I know one of the other guys working on the packaging just got a 7600, I think it was, one of the lowest-end of the current generation of AMD graphics cards. I know you can run it on other stuff; I don't know what the official story is. Some stuff works, some stuff doesn't. So, long answer: I don't really know, and I wish there were a better list. But from what I understand, AMD fully intends to have it work on their hardware going forward. So on the current 7000 series of AMD GPUs and onward, I imagine ROCm will work, in addition to what it already works on.

Any other questions or comments before we wrap up? Okay, well, if folks are interested in this, please keep an eye on Discourse, and feel free to ask questions in Matrix. If you have other questions or want to talk about other things, we have a couple more days; feel free to come talk to me. Other than that, I think that's pretty much it. Thank you all for showing up and adding your input.