Great. Okay, so we're filling in a bit. Aaron couldn't make it last minute, and so we're going to give his amazing talk, and it'll be fantastic. We are going to basically talk about how the Ubuntu community has grown, and waxed and waned over the years, and we're seeing a really cool resurgence. The Ubuntu Community Council, which I'm on, as well as the Canonical community team and a lot of community members and volunteers, have been working really hard to expand and adapt the online community in ways that bring people together and really reflect the Ubuntu philosophy. So, without further ado, except I'm going to put on a wireless mic so I don't have to hold this, and we're getting slides up, so about two minutes. We're going to talk about that in another interesting talk by our very own Simon Quigley. So yes, why don't we keep going with that? All right. In HDMI we trust. It's okay guys, I promise. Yes, so we may need AV help for the projector. Hello, can everyone hear me? Thumbs up. How do I sound in the back? Thank you. Great. All right. Okay, the HDMI works, which is not a common occurrence on this laptop, so I'm happy. DisplayPort, it does not. Okay, slideshow. Okay, let me get adjusted here; I might clip this thing on. There. How do we turn it off so I don't accidentally get a hot mic or something? Okay, sweet. All right, thank you. So hello everyone. This is my first talk here, which is a little bit interesting, because if you actually looked at the schedule, you would see that it was originally going to be given by Aaron J. Prisk. He is an Ubuntu Member and he's also on the Canonical community team, so basically his role is that he is very active with the community.
I think there are multiple people here, myself, Nathan, Simon, pretty much anyone that has some kind of exposure to the Ubuntu community, who have been able to experience the work that he and his team have been doing, really working on improving the Ubuntu community for the better and supporting it. His original talk was going to be "Creating a Community for the Next Generation." So one thing that I first wanted to do here was just introduce Aaron. I did give a little bit of an overview of who he is, but that's a photo of Aaron Prisk from the Ubuntu Summit, which he helped organize. He's on Launchpad, and he was on the organizing committee returning UbuCon to SCALE this year. He has been very instrumental; he was doing a lot of work going back and forth with the SCALE organizers, really collaborating and working with that team. He's a super great guy. Unfortunately, he had an emergency and so he was unable to make it, but I just want to take this time to really highlight the impact that he's had, and understand that without him and all the other organizers (maybe to toot my own horn a little bit here), we wouldn't be able to bring this event back to SCALE and be in this room with you all here today. And if you're curious about the picture: Aaron and I were hanging out one day and we decided we wanted to play with an AI image generator, so he decided to go for the futuristic steampunk look. And if you're wondering why he has a turkey in there with him, he actually has a pet turkey that he named French Fry. He has a farm in rural Pennsylvania; we actually live kind of close to each other, considering I'm based out of the city of Pittsburgh. He's a bit like Joe Exotic, but a lot less toxic, as I put
it. So he has a turkey; I think he has some emus; he has a couple of horses as well. Very interesting guy. He's kind of who I want to be in 10 years. So yeah, now that I've done a little bit of talking about Aaron, why don't we go ahead and highlight who is giving the talk here? So Nathan, if you want to take point, introduce yourself real quick. So, this is my 16th SCALE, so everybody knows me, but for the benefit of the live stream and new people: I've run the Ubuntu booth for 16 years now, and we've been doing UbuCon for, as I just said, we forgot to count, but about 10 or 11 years. I was involved in the Ubuntu California local community team, LoCo for short, because we're crazy about Ubuntu. I ended up on the LoCo Council, helping local community teams around the world, and I'm currently on the Ubuntu Community Council, which means I get to work with cool people at Canonical whose job it is to help me figure out what we can do, and how Canonical can support events like UbuCon, the booth, and the speaker track. Usually Richard and I arrange the talks, and he's really good about working with getting speakers and helping figure out what's the angle
we want to focus on this year, any year. And then I work usually a little more closely with Canonical specifically, and so I've had the deep pleasure of working with Aaron for a couple of years now. So I try to put all the pieces together with Richard's help and a lot of other people's help, and get volunteers for the booth, and then sit back... well, I work at the booth, but basically sit back and watch all the cool things happen along with all of you. That's why I'm so happy that you're all here, because everything I do for the Ubuntu community is in support of the community, which is you guys as well. Thank you, Nathan. And so now I'll introduce myself here. I kind of like to consider myself a relatively new member of the Ubuntu community. So, a little bit of an origin story about myself: I actually didn't start learning how to program until I was 17 years old. And why I started learning to program: it was because my aunt gave me a Target gift card for 20 dollars, and using that 20 dollars, I bought myself the book called Coding For Dummies. And now I'm standing here organizing UbuCon at SCALE, so quite a bit of, I guess you could say, ascension there.
I like to look at it this way: at that point, when I was 17, I was still very much interested in pursuing a degree in sociology. I like to bounce around a lot. So, giving some further context: I went to college and started out as a computer engineer. I really wanted to do computers, but I found that I didn't really enjoy the hardware and sitting at a cubicle and all that, and I wanted something more interesting and challenging, where essentially I have to be someone who understands how computers work, understands how to write code and develop applications, and can go out and speak to people, understand what their challenges are, what problems they're currently having, how they're trying to solve the problem currently, whether a solution even actually exists, and how we can use and leverage the power of technology to make a solution that's better for everyone. And so at that point I was originally interested in pursuing my doctorate in informatics, understanding the science of data and all that. But then, interestingly, one night I had a long conversation with myself and my now-fiancée, and, you know, inflation was bad at the time, and I decided that I wanted to instead go into industry. At that point I was installing Ubuntu and I saw "We Are Hiring" on the website, and so I was like, oh, this is kind of interesting. So I clicked it. I didn't know what to expect, and then I actually heard back from Mark Shuttleworth, so I was like, all right, I gotta get on that.
So basically, now here I am. And you might be wondering, on the slide, what does "the not-so-ancient elder of Ubuntu HPC" mean? At Canonical, my responsibility is working on the supercomputing and high-performance computing team. Traditionally, we found that Ubuntu is kind of underrepresented in that space. There is a bit of Debian and Ubuntu deployment, but it currently is largely Enterprise Linux and other custom distro builds. So, taking the problem at hand in charge, I decided that I wanted to create a community around high-performance computing on Ubuntu, and so we ended up forming a community team, very straightforwardly called Ubuntu HPC. And the reason why we call ourselves the not-so-ancient elders is because most of us are not that old. Myself, I was born at the turn of the millennium, 2000, so it's very easy to figure out how old I am based on what year it is, which is quite convenient. And yes, I was on the UbuCon at SCALE organizing committee. It was very fun being a part of it. I was actually invited to do it with Aaron; I had helped out with the 2023 Ubuntu Summit, and at that point I really wanted to work on this, because it was an opportunity for me to really engage with the community. I like Nathan, so I like to spend time with Nathan, I like to work with him. So we would have a call every other week; I think George as well, and Richard, a really great set of guys who have been stalwarts of the community. Nathan's been around for 15 years; that's really impressive. I was like nine years old at that point.
So I'm sorry if I aged you a bit. But these are people who I've always really looked up to and thought were just amazing, and it's like, oh, I have this really great opportunity to now engage with these folks. So now that we've gone through the introductions, one thing that I really want to do is look back at the title of this talk: what does it mean to create a community for the next generation? This was a very interesting conversation that I was having with Aaron. Coming from my perspective, as someone who was born in 2000, the very easy low-hanging fruit is that I don't really like using IRC that much. I find it quite dated; I'm one of those cool young kids who doesn't know how to spell and likes having an edit function and all that, and I like being able to embed videos, send emojis, and format code using Markdown. And so we got started discussing: what does it mean to create a community for the next generation? How do we get the next group of individuals coming up through school, through university, through industry, and how do we make them interested in contributing to open source and being a part of the Ubuntu community? And there's another issue too. For example, I, though I haven't used it lately, I love IRC. I think it's fine. I love emotes; if someone gets out of line, you just slap them with a trout. There's a community there; it's been around since '87, and I've been using it since '94. I'm comfortable there. I don't understand social media. I could sort of interact with it, I could use it, but then the bird site exploded, and now in the shrapnel and debris there are other sites, and there's federation, but not everywhere. So I had a grasp of social media, and now it's much harder, and now I don't. Well, I don't love IRC.
I do like IRC. I like mailing lists. I like lots of things. Some of the new stuff I don't know. So now there are two things to balance, because we all need to be in a community; we all need to talk to each other, or we're not a community. So that communication thing is not "well, we'll just grab the latest apps"; it's a lot more difficult than just that. And that's one of the reasons that the Canonical community team and the Ubuntu Community Council have been focusing on this issue of how we build the next generation of community platforms that can keep going and thrive into the future. Yeah, and I think that's a great point, and you might even ask, what is the impetus for having this conversation? Why would we raise it? Well, let's face it: Ubuntu is turning 20 years old this year. So, whoo! Still not old enough to buy a drink in the US, unfortunately. Can buy a drink in Europe if we go over there. And we also can't rent a car without a fee, which is very difficult if you're going to a city where the airport is like 20 miles away, which I cannot stand. That's like Pittsburgh, almost; it's impossible to get an Uber or a bus into Pittsburgh proper from Pittsburgh International Airport. Yes, so, on the question of Ubuntu turning 20 years old, why do we need to think about this? Well, I think 20 is a nice number to look back at, to take time to look in the mirror. That picture there, where it's supposed to be a mirror looking up at the clouds: some symbolism there, abstract a little bit. This is a great opportunity for us to really take time and look back and reflect on some of the successes that we've had with the community, some of the challenges that we've had, and honestly, also some of the failures that we've had as well, and to look at that and see how we can learn from it.
How can we come back and build better? But I also think that it shouldn't just be all doom and gloom, like, "oh, I can't believe I posted that online when I was 15 years old" or something like that. This is really a great opportunity for us to look forward and think to the future: where do we want to take this thing? Where do we want to take the community? How do we want to build a successful community, keep the momentum going, but also make sure that we're creating new opportunities? One of the tragedies for me as a writer, but one of the lovely things for everyone else, me included, and probably the world at large, is that nothing I wrote and posted when I was 12 and 13 on dial-up bulletin boards exists anymore. So, silver lining. Yes, and so, thinking about the community of the next generation and thinking to the future, the question that we should really ask is: what should the next 20 years of the Ubuntu community look like? How do we want it to take shape? What do we want to accomplish? What do we want to keep doing? What do we maybe want to stop doing? Obviously, certain things need to be considered: some things are feasible and attainable, and other things would be nice to have. But it's a great time for us to really look and say, you know, the world's changing. I mean, just look, for example, at ChatGPT; companies want to cram AI into everything now, so that's very fun. And then also looking at how these trends in technology change, and these trends of, say, the evolving definition of what it means to be open source: how could we adapt this into the community and continue on this natural, healthy growth? Right, and the important thing to know is it's not "let's make a 20-year plan," but let's make sure, as we reflect, that it's been 20 years and we have all that experience: what have we done? What's worked? What hasn't? What new technologies are coming along now? And how do we make sure that we are evolving and on the right course? Because 20 years from now, the landscape will be completely different. You know, maybe it'll be Discord, but in our retinal implants that we see all the time, or something, right? We can't plan for that, but we can make sure that we are not stagnating, not standing still, that we are learning and adapting, and not just serving the current community members better, but also being inviting to and serving new members as well. Yeah, and so now this might be a little bit abstract here, but I definitely like symbolism. Effectively, what this is a photo of is one of the three rivers in Pittsburgh. If anyone doesn't know this interesting little factoid about the city of Pittsburgh, which is where I'm from: we have over 400 bridges.
We have the most bridges in the world in our city. That's mostly because we have the Allegheny, the Monongahela, and the Ohio Rivers, and we used to have a very strong manufacturing and steel industry, and so we needed a lot of bridges to help move all that coal and steel. So why did I pick a photo of, say, the Pittsburgh rivers with all the bridges (crossing traffic is a nightmare, by the way, but that's not the point)? What's the importance of this? I think one thing that we should really focus on in the future is: how do we build bridges? How do we create bridges that allow people to get involved in Ubuntu, and how do we reach out to communities that maybe traditionally haven't been involved as much? You can look and see, like, oh, we have the techies who like the packaging and the really code-and-engineering aspects of things, where it's like, we just love the raw technical aspect. But how do we create opportunities for, say, local communities? How do we create opportunities for schools?
How do we create opportunities for, say, even companies to get involved in the community, to contribute, and to spread the philosophy of Ubuntu? One thing that I really want to dive into, in how we're having this conversation around building community for the next generation, is to look at a lot of the recent successes that we've been having with the community. We've been doing a lot of work, at least over the past year, helping to adopt new initiatives and new platforms with the community, and I just wanted to take a chance to really reflect on that and understand what went well and how this is good for us going forward. So the first thing that I really wanted to touch on, which is probably also my most popular post on Mastodon by a mile, is the recent adoption of the Matrix platform for the Ubuntu community. If you aren't aware of what's been going on there: basically, over the past couple of months, we've really been working to make Matrix the first-class platform for communications within the Ubuntu community. What exactly was the impetus for this?
Well, we looked at IRC and saw some challenges we were having there with the chat platform, and we started seeing where else people want to have synchronous communication, or even asynchronous communication. So we went out and looked at a lot of possible chat clients; for example, there's Slack, Discord, all that. The real issue with a lot of those is that, yes, they have the features that a lot of new community members want, like editing, text formatting, and whatnot, but the problem is that none of them are open source. Discord is not open source, Slack is not open source, and Telegram is also not open source; that was a popular one too that people liked using. Instead, we looked at Matrix and the ecosystem around it, and what we really liked is that it presented the features that folks wanted from a chat client, but also still was in line with our philosophy: it needs to be open source, and we need to be able to contribute to it and feel like we're helping push open source forward. And so we started adopting Matrix. And, Nathan? So, the interesting thing is that the planning for this year for SCALE actually took place on Telegram, and a lot of other planning, planning for the bigger Summit, also took place on Telegram. And so you might ask: why would Ubuntu community members and organizers choose Telegram?
The reason is because when the Ubuntu phone came out, a Telegram client came out really quickly, and everyone at Canonical was dogfooding the Ubuntu phone and the mobile OS, and that's how I eventually decided to grab Telegram, because I wanted to talk with them and get all the inside scoop about the phone project and so on. It was very useful, and it was one of the easier things to talk my friends into, because of course we were all using text messages and Facebook, and I think Discord was still early on, and lots of things. I'd be pulled, dragged kicking and screaming, to Discord by my kid and his friends, and my friends' kids, just to play Minecraft. So Telegram kind of grew because it was there and it worked. It was really decent, has a lot of great features; they've really pushed in the last half year and it got some really good features. But in the end, a lot of people don't want to use it because of the company who makes it, because it's proprietary, and so on. And so we had a chance to look at what the alternatives are, and yeah, almost all discussion in Ubuntu has now gone from Telegram to Matrix this year, as far as the high level where we're making stuff happen and making plans. So that's why Telegram, and that's also why Matrix: we needed to have something like that, and Matrix, with its federation and so on, home servers, you control your data, it's open source; it just made a really good proposition. And it's getting to the point, client support and so on, that it's very usable, whereas a couple of years ago I didn't find it so. Plus, no one else used it. Yeah, that was a big thing; folks didn't use Matrix for a long time.
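The federation and homeserver model mentioned above rests on Matrix being an open, published client-server HTTP API, which is why any client or homeserver can participate. As a minimal illustrative sketch of what a client does when it joins a room by alias (the homeserver URL, room alias, and access token below are placeholder values, not the actual Ubuntu community ones):

```python
# Sketch: building a Matrix room-join request against the open
# client-server API (POST /_matrix/client/v3/join/{roomIdOrAlias}).
# All concrete values here are placeholders for illustration.
from urllib.parse import quote

def build_join_request(homeserver: str, room_alias: str, access_token: str):
    """Return the (url, headers) pair for a room-join request."""
    # Room aliases like "#ubuntu:example.org" contain '#' and ':',
    # so they must be percent-encoded when placed in the URL path.
    path = f"/_matrix/client/v3/join/{quote(room_alias, safe='')}"
    url = homeserver.rstrip("/") + path
    headers = {"Authorization": f"Bearer {access_token}"}
    return url, headers

url, headers = build_join_request(
    "https://matrix.example.org",   # placeholder homeserver
    "#ubuntu:example.org",          # placeholder room alias
    "syt_placeholder_token",        # placeholder access token
)
print(url)
```

Because this API is an open specification rather than a proprietary endpoint, any conformant client or bot can speak it against any homeserver, which is the property that lets community-run and Canonical-run rooms interoperate.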
But one thing that was very, I think, unique about this transition to Matrix is that when we started moving over to it and adopting it as a mainstream platform for communication, we found that Ubuntu the operating system itself was not the only community that moved onto Matrix. For example, you can look at maybe traditional things like support; that's a big one, a lot of folks like support. You could say, oh, when we adopted Matrix, we just created a support channel for the operating system, which is not fully what happened. What was interesting is that we were able to bring over Ubuntu as an ecosystem. A lot of projects that you could consider friends of Ubuntu, or adjacent to Ubuntu, were also very interested in adopting Matrix as a mainstream chat client, and they really looked at it as an opportunity for us all to fly under the same banner of Ubuntu. So actually, if you go on Matrix and access our Matrix space, you'll see that it's not just operating-system-specific, desktop-specific stuff; there are a lot of sub-communities in there as well. For example, just pointing at some of the logos there: we have a pretty active snap community on Matrix. The Snapcrafters are on Matrix; we also have the Snapcraft, or the Starcraft, team as well, and they're sitting up here. And snapd is on there too. If you're looking at that little squiggly snake thing on the far right side... sorry, it's actually on the left for you folks... that is our Juju community. You mean stage right. Stage right, thank you, Nathan. So we also have our Juju community.
That, as well, is kind of more the DevOps-y, cloud-focused community. So, you could say identity, like LDAP, that's basically the easy example. We also have observability, like Grafana and Prometheus. We also have telco, and then some other communities as well, like OpenStack. For those of you that don't know, Ubuntu is very widely used in OpenStack; we have a very vibrant OpenStack community, so they're there as well. We also have our LoCo communities coming onto Matrix; for example, we have Ubuntu Portugal, we have Ubuntu Korea, and a lot of others starting up as well, like Ubuntu United Kingdom. And then we also have the security-minded folks on Matrix, people who are nuts about security, love security, and want to help make the world a more secure place; they're on there as well. We also have Ubuntu Touch, and then some other sub-communities as well; our Ceph folks are on there, and they provide lots of help around using Ceph. So that was a great thing about Matrix: with that space functionality, maybe similar to, say, Discourse groups, we are able to effectively create multiple channels and sub-communities that allow all those rooms to be in the same place. So if somebody says, hey, I want to join the Ubuntu Matrix, you'll have a one-stop shop for all the open source communities that are a part of us. The nice thing about that too, and this is sort of to give an overview of what the effect is: for internal communication, Canonical, I think, uses Mattermost. And so, for example, I'd be at a summit or something, and there'd be a group, and I'd pass by, and they'd say, oh, hey, we were going to have dinner or something, I was looking for you on Mattermost, why can't I find you? And I'd say, well, because I'm not on Mattermost. "But you work for Canonical." I do not work for Canonical. "But you used to." No, I never did; I do this for fun, I'm a volunteer. So they ran a completely different platform. And so part of this adoption of Matrix on their side (and I have absolutely nothing to do with this, because once again, I do not work for Canonical; not that I wouldn't, I just don't) is that when that communication gets to Matrix, all the employees are on Matrix, and all the community stuff has rooms on Matrix, all these projects and so on. Because the people who have been working on snaps or cloud or Juju or any of the other billion things that make up Ubuntu, and who are getting paid to do it, have always been very accessible, very open to interacting with the community, but they were working every day, for work, on a different platform. And so now, with them all being on one platform, it's not just the rock stars, like Ken, for example, who's responding on Discourse on the community hub and so on, but also a lot of really hard-working engineers who just didn't consider it; they're working, they have a job, they work really hard. But now suddenly they're on this platform, they can see the other rooms, they're exposed to the community, and they're starting to contribute and being more open, realizing:
Oh, hey, we're all working on the same thing. And so I have heard rumblings, early on, that that was something they were seeing, and one of the goals, to make sure this transition works, is to make sure that some engineer who just looked for a job, and is doing really hard, important work, can easily see that they're actually part of a global community of really, really smart people as well, and to open up those communications so that it's not Canonical and a giant wall, and Ubuntu and a giant wall, and people who use Ubuntu; we're all one community. And that's one really nice advantage of using Matrix that we started to see and pushed for. Yeah, I definitely agree with that. Once we started adopting Matrix, there was a large push internally inside Canonical to start using Matrix as kind of the main communication platform, and part of the reasoning why we really wanted to do that was because there's been a really strong push to be more community-focused, more engaged with the community, and to encourage engineering teams to think about the community and create a place where it's easy to get in touch with them. Because yes, it is very hard when you're trying to give feedback, or you're trying to ask questions, or you're trying to contribute, and it's like you're basically speaking to a brick wall, or maybe even a black box, where you just punch something in and then hopefully it's discussed in the private company chat. But now, with Matrix, a lot more conversations can happen in the open, which is definitely a great benefit. Yeah, the Ubuntu Community Hub, which runs Discourse, really had that effect as well, and so Matrix is really a force multiplier for that.
So on the community side of things, I'm really, really happy to see that, because I know how awesome everyone at Canonical is, and now a lot more people can know it too. That's an opportunity that Matrix and Discourse have allowed us. And so now that we've spoken about Matrix a bit, the next really exciting thing that I wanted to highlight, that we've been working on this year, is working to reignite LoCos, and the LoCo Council that's associated with them. If you look at the little graphic here on stage left of the slide, you will see that we've really started having a lot of communities either re-emerge or really start bouncing back from around 2018 and the pandemic, when everything just kind of got put on pause. So I wanted to highlight a lot of the local communities that we have starting up, or really restarting. One exciting one is Ubuntu Nigeria. We got some really passionate and excited contributors that came to the Ubuntu Summit in 2022, and they were like, we want to start a LoCo, and it's like, great, go do it. And they did, and they actually recently had an event; if I remember the name correctly, it was called Ubuntu Meetup Africa. It was hugely successful, and one of their main organizers made some really amazing artwork for that event and really introduced a lot of people in their community to Ubuntu, which is great. And then some of the other exciting ones: our Korea community, they're doing UbuCon Korea, because Ubuntu Korea are rock stars, and we could cancel the next talk and talk and talk about all the great stuff they do. So, thank you.
Thank you to them so fun to meet them at the the last two summits but um Ubuntu nigeria came in we're really excited I got support from I think uh, I think erin and morrow On the canonical ubuntu community team and got some support got some pointers and um, not only uh, was this a a giant reason to say, you know, the local community team effort has Weigned and kind of gotten really slow and we want to restart it and that was definitely a issue for the community council But in addition, uh in nigeria, I don't I don't remember the leader's name He went in he got a meet up got a lot of interest did some talks and as far as I know Did the first ubuntu install fest ever in nigeria and it was a massive hit and You would never know this is a new team because the artwork would create their their production value was fantastic The presentation was great. And so, uh That's a sign of what what people, you know, can can do. They want to volunteer. They're excited about any topic including ubuntu And it because this is ramping up and this is something we knew we wanted to do the community council got together and Was able to reform the ubuntu local community council Who are so community council focuses on a lot of things top level The local council members, um, I've done the work before it's it's basically you've just focused on those issues of support and Encouraging things and and helping guide people the resources For sponsorship and other things. How do they get a tablecloth? How do they get swag or or What kind of how do they get the logos to print banners? And so we've been able to put in place a council who In their just first couple months of existing of reexisting have done really Hard work and in getting people reengaged. So thank you for all of them as well Yeah, they've definitely been, you know, a huge Master's help and it's really great to see because like, you know, I want to form my own local as well Um, so we do have the ubuntu pencil Vania. 
I know that was a pretty large community; I still see them on Launchpad, though it doesn't seem like they've had a whole lot of activity. But Aaron and I, as Pennsylvanians, really want to bring it back. I also think there's a very active Linux users group in western Pennsylvania. So it's exciting to see folks around the world feeling excited and empowered to start local communities in their area. The last thing I wanted to point out on this slide is the QR code: if you scan it with your phone, it will take you to the LoCo Council Matrix room on the Ubuntu Matrix homeserver. So if you have any questions (maybe you were a past member or a past leader who wants to get restarted, or you just have general questions about starting your own LoCo or getting involved), that's a great opportunity to start that synchronous communication. I think they would definitely be happy to see that folks are interested.

Ubuntu Korea, meanwhile, has not only restarted; they have their great UbuCon events, usually in Seoul or elsewhere in Korea, and they've also worked with the other Southeast Asian teams to help them put on events. There's UbuCon Asia; that's right, UbuCon Asia. They were in Jakarta one time, and I think it's in India this upcoming year. They've been working with all those other teams to help and support them. And Ubuntu Peru, Colombia, and I think Venezuela have all worked together.
There's a UbuCon Latin America. That's a cross-LoCo, international effort of a lot of volunteers. So it's not just "I helped set up a new council and they can help people." It's people on the ground, like the people who come to SCaLE. For example, if there's a university near you, talk to a professor or advisor and see if you can do an install fest, or see if you can do a presentation in an IT class or something. It really is the grassroots effort that helped spread Ubuntu and made it popular in the first place. LoCos are just a way of pooling resources and advice, and we're trying to get that spreading and active again, like it always was.

And on the topic of LoCos and organizing events, it's really great that you bring up UbuCon Korea and UbuCon Asia, these community events, and even UbuCon at SCaLE. Another recent success we've had is starting the Ubuntu Summits. As I like to look at it (obviously I wasn't around then, since I was still in high school; sorry, Ken), there used to be the Ubuntu Developer Summits, and starting back in 2022 it was decided that we wanted to do those again, but this time under a rebrand as the Ubuntu Summit. If you look at stage right here, at the farthest right, you'll see the castle; that is the logo for the Ubuntu Summit that was held in Prague in 2022. It was the first time we had put on an event in that style, and it was a very interesting, fun time. And then there's the one in the middle.
That was our latest Ubuntu Summit, in Riga, and that event was a smashing success. If you're wondering what the logo is: there is a Freedom Monument in the town square in Riga, dedicated to the pride and sacrifices of the Latvian people, and when we held the Summit there we really wanted to highlight the local community we have in Ubuntu Latvia. And if you look at stage left, you see a bit of a smirk emoji: we'll have some exciting updates coming out soon about the Ubuntu Summit 2024, so definitely stay tuned for that.

Yeah, I think we should get Pasadena, right? Well, I know it's not Pasadena this year, and I'm not allowed to say; I'm not allowed to know, actually. So we won't talk about that. What we will talk about is this: in Riga I ended up giving a talk, and I won't repeat it here because it's online and needs the view counts. But because it was the first one, I got invited, which is great, and they flew me out there; it's a perk. They brought community leaders from all over the world into one place, something SCaLE manages in some years, but this was a concerted effort to get everyone together. The talks are great, the hallway track is fantastic, and then there's going out to the bar at the hotel afterward, or to the restaurant, where you just gather and get to know each other. Then you go off and you're on email and Matrix and so on, and you know each other, and it builds those bonds. The Ubuntu Summit, and Ubuntu events generally, are very much like SCaLE. Nobody believes me when I say how great SCaLE is, and if I try to describe the Ubuntu Summit, no one believes me either. But we all kind of know what it's like here.
And so after the first one, I had some time to speak in Riga, and I spent a lot of it talking about community and how we all build together, like the points I've mentioned here. But I also specifically mentioned UbuCon at SCaLE, because to me this is the perfect way these conferences should feel. The Ubuntu Summit has this exact same friendly feeling, and yet I think a lot of tech events don't feel that way. It's like a big family; the Ubuntu Summit feels like going and spending a weekend with 800 of your friends. We know that here at SCaLE, and Jason doesn't know it yet, but he will by the end of the weekend: it's like hanging out with 2,000 or 3,000 of your friends. So the Ubuntu Summit is a way, just as you all come to SCaLE and see all the communities and all the people working, for people to come in and see not just the people they interact with but also all the cool different projects going on in the Ubuntu community. It's that camaraderie that kick-started Ubuntu Nigeria, and I think the Korea team decided to go bigger with their UbuCon Asia because of it. It's all a work in progress, but the event is something that's really exciting.

Yes, so, a little bit about that too. Something else we wanted to highlight is new community teams. We're getting close to the end of the time slot for this talk; about fifteen minutes left, but just a few more slides. Ten minutes? Ten minutes, yes, so we have about ten minutes; fifteen if we really want to mess with Simon. So, we had new community teams join us this year. This was one that I helped start: Ubuntu HPC.
If you don't know what HPC is, it's supercomputing: high-performance computing. We had a lot of folks in the Ubuntu community who were very interested in using the distribution for petascale and exascale computing, so we decided to unite under this one banner of Ubuntu HPC. It's been very fun; we're coming up on our one-year birthday. The first few weeks were learning how to walk and all that, but it's definitely been a great time, and we actually have weekly community calls now that are open to the public: Wednesdays, now at 17:30 UTC. If you want to talk with some fellow supercomputing nerds, we are on Jitsi.

The next community team over here is our Rocks community. They're focused on using Ubuntu for container images, and they're also a great group of individuals. They maintain some of the *craft tools: for example, you have Snapcraft and Charmcraft, and they have Rockcraft. They really enjoy making good OCI images for different applications, like databases. They also have this really awesome utility called Chisel, which is great for taking apart Debian packages and fine-crafting your containers so you can get as minimal and small an image as possible.

And yeah, I think that's pretty much it. I forgot to put the question mark on the slide, but if anyone has any questions, we have maybe five minutes, so feel free. Also, we do have some Ubuntu merch over here: stickers for Kubuntu, the Circle of Friends, and the Mantic Minotaur, plus some notebooks and coffee cups. So feel free to walk on by and pick them up on your way out. But yeah, any questions?

Yes, yes. You know, we were trying to get that great cube mic for next year.
Yes. No, we don't. Thanks, Nathan.

"I'm not in charge of the Ubuntu LoCo, but how would we go about getting that onto Matrix?"

Yes, so if I understand your question correctly, you're wondering how you get your LoCo community onto Matrix, correct? In that case, if you pop into the LoCo Council Matrix room, you can send a message there saying, "Hey, I'm from X LoCo; I'd like to create a room." Pretty much all of the council members have the ability to go into the Ubuntu homeserver and create a room for you, and then they can put you in it, give you moderator powers, and so on. We also have some other utilities, like a moderation bot that helps protect all the rooms on the server.

"Wouldn't the leaders of the LoCo have to be the ones to go through, though?"

Yeah, preferably, and you can poke them, for example. If that's the problem, you can contact the Ubuntu LoCo Council and say, "I need help finding the leadership," and maybe, if the existing leaders aren't around, you could take over there if you're interested. You can also get a category on the Ubuntu community hub Discourse as well as the Matrix room; it can all happen at the same time. Yeah, so I would talk to Walter Lapchynski; he is a former member of the Community Council, and he is on Matrix now. We're looking to put a lot of this on Matrix, so I'd say talk to him and see.

And that does bring up a good question. If you have a LoCo, I do appreciate that folks are cognizant that there used to maybe be a LoCo in their area, and they don't necessarily want to just say, "get out of here,
thanks for the work," or "thanks for all the fish," if you've read The Hitchhiker's Guide to the Galaxy. We are actually working with the LoCo Council on a recertification process. So if you are interested in starting up a LoCo and you can't get in contact with the previous leadership, the council can help you initiate that process of taking over.

"I've got a question; I'm with the openSUSE community. I'm kind of curious: how do you deal with bridging the separate communities, like Matrix, Discord, IRC, Telegram, and so on? Because I know it's been fractious in our community, especially with the old neckbeards on IRC and that fancy new Discord/Matrix thingy; they're like, 'Oh no.' So how do you manage that in the Ubuntu community?"

Yeah, you bring up a really good point. Bridging has always been a bit of a hot topic; some folks I talk to say "just don't," and others say it works. For example, one thing we've really found helpful is bridging Telegram to Matrix. We have a lot of communities on Telegram, especially for the Summit; Telegram's mobile client is very good, and I quite like it myself. There's a bridge (I think it's t2bot.io) that's very helpful for that. There are some technical challenges, obviously, like a bit of lag, but it works. The big thing that really helps is having clear expectations. If you want to bridge your Slack community, that's probably not going to happen, because the Slack bridge doesn't really work that well. IRC, we've bridged a couple of those communities. And one thing that we've also found really helpful
is simply explaining the benefits of Matrix. When you introduce something new, change is uncomfortable, but you can show people they have a concrete problem. Maybe people dump stack traces straight into IRC, and it's really gross to read and gets lost; then they discover, "Oh, this is really nice: you can just add the three backticks and specify the language format." So we've found that we don't want to use bridging all the time, because it sometimes doesn't work well and it can degrade the experience in those previously existing communities. But we also find that first talking to those communities, gathering a summary of the current problems they're having, and then seeing how we can curate their Matrix experience to solve those challenges really helps with getting folks to accept Matrix as a chat platform.

The other thing, more broadly to the point: we have an Ubuntu IRC Council that deals with IRC governance, the policies and procedures, who gets a room and who doesn't, infractions, managing bans, and all those IRC things. Some of them are just as capable on the new platforms like Matrix; some are not and have no interest. So we, the community, or rather the Ubuntu Community Council, approached them and talked about creating a, oh gosh, I think it's a Communications Council. Yes, a Matrix Council slash Communications Council. It deals not just with Matrix but also with communications on IRC and the forums; there are groups already in charge of those, so we get a representative from each one. And so now we have a group, with focus groups, that can deal with our legacy platforms,
and we also have a way to decide which other platforms we want to identify and work on, when to bridge them, and how they work together. That allows the people who love IRC to focus only on that, while still being a part of deciding where we take things in the future. So, that's that. We've got better communication between those different teams; they're not isolated, and they make those decisions together. We were wondering, "How will this work?" And, very much in the Ubuntu spirit, it has actually worked beautifully, and I can't wait to see what else comes out of it, because I need recommendations for these things; I'm old and I don't know, so I'm not qualified.

And that's our time. Thank you all so much for being here. Yeah, thank you. And thanks to Jason for his amazing talk. Thank you for bearing with me. Simon, if you want to start coming up here; I just want to say one more little thing. What was it?
When I heard from Aaron that he wasn't going to be able to make it anymore, sheer panic came over me at first, because Aaron used to be a K-12 director, and you can tell; I just always trust him with everything. I'm like, "Man, you've got me covered." So when I found out that he really needed some time to focus on himself and recover, I thought: the content he wanted to deliver in this talk is still really important. Obviously, he probably could have delivered it a little better, since I had to make some slides last minute. But I do appreciate the work he has done, and the community team, the Community Council, and the community at large have done, and I think it's important that we take some time here today to really appreciate the work that you all have been doing. So thank you.

I think Jason did a great job, and the fact that he was helping Aaron in the background meant he could just jump right in. So even though it was short notice, it's been seamless, and Richard and I were never worried one bit. In a couple of minutes: the next talk, with Simon. Thank you. Before we start the next talk, thank you to everyone who came along; grab one of these. Great job. Thanks.

"How do I use one of these?" Let's see; is it like there's a snap that goes into the jack in the back of your skull? You're not going to get the jack. All right, stand still. Yep. Put those on your ears, don't freak out, and I'll clip this in your pocket; I'm just going to pickpocket you real quick. Okay, cool. There we go. I'll just talk. How's that? Check one, two.

So for our next talk, it is my honor and privilege to introduce the famous Simon Quick... no, sorry,
let me check my notes: the infamous Simon. And his talk is called "Open Source Is Not Just Code."

Simon: I tried to write this talk a couple of times; once, twice, three times, and I just couldn't get it right, so I'm going to speak mostly from my notes right here. On December 27th this past year, 2023, I published a tweet, and I'd like to read it:

"Open source is more than open code; it's open collaboration and open community. In open source projects you find a variety of contributors. Some have been around for decades; others have been around for months. Some are paid for their work; others do it out of the kindness of their hearts. Some people are just starting out in the tech industry and need a place of belonging; others have been in the tech industry for decades. Given these facts, it is important that everyone works well together when the opportunity to accomplish collaborative work presents itself. An often-forgotten mantra in the Linux community specifically is: know your audience. Understanding who you are talking to, and why you are talking to them, is as critical as, if not more important than, the message itself. People lose focus; it can be easier sometimes to tell someone to just RTFM, or to ignore them because they could have just DuckDuckGo'd that specific thing. Remember that you too were once that inexperienced. On the other side of this, when dealing with people more experienced than you, remember that basic respect goes a long way. I've met many people who are major contributors to critical projects, and they put their shoes on one foot at a time, just like you and I. If everyone feels good when contributing to an open source project, it will continue. If you let sour emotions get the best of you and everyone else in the project, they will consume the project along with it."

So, essentially, it's been a journey for me.
I first got involved in the Ubuntu community at 13 years old. When I was nine, my stepdad first showed me computers; he showed me his method of working with Windows machines and helping others with tech support, and that really inspired me. It was a great journey to look into that, and I got very interested in technology at that age. To my understanding at that point, there were two operating systems: Windows and macOS. I installed Windows, because I didn't have a Mac and didn't have the ability to install macOS on a computer. So I was working with Windows, and I would try to break things; I would edit the registry, I would try to tweak the panel the way I wanted it to work, and it just wouldn't do what I wanted. At that point I decided there had to be a better way, something different from Windows or macOS, something that could give me the power to customize my own machine. I did some research and found this thing called Linux. I was about 10 or 11 years old at the time, and it was interesting to me. I had played Minecraft for a very long time; in fact, I was good enough with redstone that at about 11 years old I built an entire 8-bit computer in redstone in Minecraft. Through that I learned a lot about logic gates, about the different pieces that make up a computer, and about the way computers do addition, subtraction, multiplication, and division.
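Just to illustrate the idea (this is my own sketch, not anything from the talk): the trick behind a redstone computer is that once you have one universal gate, you can derive the others, wire a few gates into a full adder, and chain adders together to add whole binary numbers. All names below are hypothetical, for illustration only.

```python
# Build everything from one universal gate (NAND), the way a redstone
# computer builds everything from a handful of primitive circuits.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

# Classic derivations of the other gates from NAND.
def not_(a): return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b): return nand(not_(a), not_(b))
def xor(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def full_adder(a: int, b: int, carry_in: int):
    """One column of binary addition: returns (sum_bit, carry_out)."""
    s1 = xor(a, b)
    sum_bit = xor(s1, carry_in)
    carry_out = or_(and_(a, b), and_(s1, carry_in))
    return sum_bit, carry_out

def ripple_add(x: int, y: int, bits: int = 8) -> int:
    """Chain full adders, least significant bit first (a ripple-carry adder)."""
    carry, result = 0, 0
    for i in range(bits):
        a, b = (x >> i) & 1, (y >> i) & 1
        s, carry = full_adder(a, b, carry)
        result |= s << i
    return result  # carry out of the top bit is discarded: 8-bit wraparound

print(ripple_add(77, 51))    # 128
print(ripple_add(200, 100))  # 300 mod 256 = 44
```

The same wiring diagram, laid out in redstone dust and torches instead of function calls, is all an in-game ALU really is; multiplication and division are then built out of repeated addition and subtraction.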
I really found my my passion for understanding the internals of a computer for understanding the internals of software um And it really it drove me to go and look for a spot within The online community that I could actually go and contribute to um Essentially I I was looking around and I found this thing called ubuntu Now for me at the beginning it was it was difficult to pronounce and of course I was I was kind of skeptical um, this brings me about To about 13 years old I was at the point where I wanted to join this community. I'd read the packaging guide Um, I didn't make much sense of the packaging guide at the time But I looked at those diagrams and I was I looked at what free software actually means And it inspired me I was You know, I wanted to to contribute to this this this community The idea that anyone anyone whether they are are Somewhere in a corner of the world, you know compared to a large corporation Anyone can contribute to this thing if they have the skills and That idea was incredibly inspiring for me. So at 13 I joined ubuntu Um, I hopped in the irc channels. I was using kiwi irc way back in the day when they're when we had just a bunch of Of web-based irc clients and I just joined the the ubuntu irc channel Now the my my first computer that I had um on my own it was I think my mom spent a hundred dollars for it on facebook and So it was it was a lower-end piece of hardware and I found that with the ubuntu I could install that on there and I could still get all of my work done I could still customize what I wanted to customize The things that windows didn't give me and I was able to actually look at the code behind each of these each of these components and So I joined the irc channel The the first interactions I've had were Does this thing have an age limit? 
Do you need to be a certain age to be able to do this? And the answer I got was no. I found a mentor within that community; his name is Walter Lapchynski. He took me in and trained me on some of the basics: this is how you work with the Linux terminal, this is what you're looking for in certain configuration files. He really worked with me and got me to that point, and I'm eternally grateful for that. Then, after contributing to Ubuntu for three months... now, usually with Ubuntu membership, it's "significant and sustained contributions" over a period of six months, is what we usually say. I remember reading that it said three to six months, and I thought, all right, I'm just going to try; we're going to see if I can get it at three months and see what happens. And I actually got Ubuntu membership after three months. Looking back on that a couple of years later, when I myself became a member of the membership board: they were skeptical. There was some dissent; they didn't want to just give me the membership, but they also saw the passion and drive I had, and I'm thankful to them for giving me that chance. So at 13 I became, of course, an Ubuntu member; no, I was 14 at the time. For some context, I know Jason said earlier that he was born in 2000; I was born six months to the day after 9/11, so that puts me at March 2002. So I was 14; what was that,
2015, 2016? On one of the first Ubuntu dailies I installed, there was the option between Upstart and systemd, so I've really been raised on systemd as a component anyway. At about 14 or 15, I really wanted to do development work. The reason I joined the community in the first place is that I read that packaging guide, looked at those diagrams, and was inspired by the idea of free software, so I wanted to learn what I could to get into that development mindset. The first thing I ended up doing was bug triage. I would go through and triage a lot of bugs and really come to understand how a lot of this software worked; some of it worked really well within the Ubuntu archive, and some of it really didn't. Through those efforts I found out how a lot of it worked, and I transformed that into the knowledge I have about packaging and so on. It took a lot of work, because the packaging guide really wasn't up to snuff. At 14 I also attended my first Linux conference: LinuxFest Northwest. I wanted to see what this community was about; I wanted to go and learn from these people in person. I applied for funds through the community donations fund and was accepted, and what I found at that conference is that I felt at home. I felt like these people had the same interests I had, and once I found that out, it really inspired me to continue a lot further. So at 15 I became an Ubuntu Master of the Universe (MOTU). Now, that title really represents a lot of work. People say,
"Oh, you're a Master of the Universe," and so on, but it took countless hours of banging my head against the wall and failing over and over again until I finally had that success, until I finally got to the point where I knew a little bit about packaging. And it took a lot of people saying, "Well, you don't know what you're doing; maybe you're a little too young for this; you don't have the skills necessary," and I just looked past all of that. So yes, at 15 I became an Ubuntu Master of the Universe. Also around that time, Lubuntu, the community I originally joined within this space, needed a release manager. Walter was the release manager for Lubuntu, and because of some conflicts I can't really talk about publicly, we got to a point where somebody needed to step up into that role. I don't remember exactly how old I was, maybe 15 or 16, when I became the Lubuntu release manager. This was a burden; I really didn't know what I was getting myself into. The best way to describe it is drinking from the fire hose. There's a lot that comes in for a specific flavor that you need to be able to deal with. "Okay, we're moving from Upstart to systemd": that's a great example; you need to adjust your flavor for that. The move from PulseAudio to PipeWire is another we've had to make, even recently. There are a lot of these components, and sometimes I feel like I was a little too young, but at the same time I had that drive and passion and I put the hours in. At 16 I also became a Debian Developer; I became an Ubuntu Core Developer and a Debian Developer, and for me this was just a continuation of all that work. I wanted to refine my skills, because at the end of the day I want to be able to tweak the software I have.
It really comes down to this: if I have, for example, a TV, or even the media display in my car, my main purpose, my main drive, my main passion is to know that this is free software and that anyone can contribute to it, anyone can make it better. So at 16 I kept attending Linux conferences and continued along this path. It's been interesting; I've worn a couple of different hats in that time. I was a member of the Ubuntu membership board. I was a member of the Developer Membership Board; in fact, when I became a core developer for Ubuntu, I was on the Developer Membership Board and had to abstain from my own application. At that point I really had to decide what I wanted to do, because if you're 16 and you're an Ubuntu core developer and a Debian Developer and a release manager, you really have to ask: what is the next step? What do I really want to do? And for me the answer is to give back to the community, to give back to the people who are in the position I once was in; that's what I find a lot of passion in doing these days. It's looking at the people who are younger and coming up. For example, a great recent effort is the Ubuntu Unity project, which wouldn't really be possible without Rudra; and Rudra is, I believe (Nathan?), maybe 13 or 14 at this point. 14, yeah. I really like to give back to these kinds of communities. So through my time and experience, it's a bit of a balance. On the one hand, I feel like I have a lot of experience in these spaces and efforts; on the other hand,
I sit in front of you today at 22 years old, and there's part of me that does feel inexperienced. But I would like to share some of the things I've kept in mind, in those release management roles, in keeping communications open with other people, and in building up my own skills and the community around me. One of the biggest is: instead of assuming malice, assume ignorance. It's a complicated concept, right? If you're interacting with somebody online, they could come off as abrasive; they could be saying, "this is the truth, this is what I want to say," and they could even be technically right. For example, "systemd has all these advantages": well, if you say that to an Upstart developer, they might get kind of mad. The main thing is to have some compassion, some empathy. Understand that everyone in here has a story. Everyone in here has gone through trials and tribulations to get to where they are today, and if we fail to remember that, if we fail to recognize that, if we treat each other with hostility instead of building each other up, it really puts a damper on things. And I'm happy to be a part of this community. The other thing was: treat others with respect, and know that while I may be a developer, somebody really interested in code, there are a lot of people who aren't interested in that. Some people want to do artwork, or documentation, or any of these other things, and to become an Ubuntu member there's no single straightforward path. You don't have to be a developer; you don't have to do one specific thing.
In fact, if you help run a local community team, or if you start and maintain one, that is definitely an avenue toward Ubuntu membership. There's a great deal you can do within the Ubuntu community and within the wider free software community.

All it takes is understanding that you will fail. There are times when you do everything you can to make something work and it just doesn't work out. At that point you can make a decision: you can either give up, say "this isn't for me," and move on to something else, or you can say "this is something I'd like to continue to pursue despite all these shortcomings" — continue, and really make an impact.

There have been a great many times when I've said or done something that, being in the community at 14, 15, 16 years old... I'll look back at some of those communications, the mailing list posts, the IRC logs, and it frustrates me sometimes. And I think that in the age of technology, the age of information — whether you're on social media or not — there's some history behind everyone. If you look at somebody's Twitter posts from eight years ago, you're missing a lot of context. You're missing who that person is today versus who they were back then. If we focus on dredging up those old things, we're really assuming malice, not ignorance.

There are some people who don't have that fear of coming up and speaking, people who don't have that fear of continuing to do things that make them uncomfortable — and really, that's how you grow. It's putting yourself in that spot of: okay, I may not know everything about this particular aspect.
I may not know everything about, for example, snaps, or something along those lines, but you just dig into it and you're persistent — that's really the key to success. People have asked me over the years, "How do you do it? How do you manage all this?" And there's no straightforward answer, though I'd like there to be: some pathway, some idea of "this is how you start in a specific part of Ubuntu and end up becoming an Ubuntu member and actually making a difference in this community." Because it shouldn't take the amount of persistence I had to have. It shouldn't take banging your head against the wall over and over and over to get to the point where you're making a meaningful impact.

I do think there is actual value in a meritocracy. Of course, a do-ocracy is a little bit different — if you look at a talk by Merlijn, a member of the Community Council, he goes through the elements of do-ocracy versus meritocracy. I'm a very firm believer in, and was raised on, the idea that everybody — regardless of where you're from, what you look like, your gender, etc. — has a chance to make an impact in this community. Some people weren't raised on that; some people don't have those ideas, and it's frustrating to see sometimes.

I think that's what I have for my talk. Are there any questions?

[An audience member asks a question off-microphone.] Could you please repeat it into the microphone? I apologize.

[Audience] Your talk is similar to one that I have been boiling in my head for years.
Yes. You mentioned the membership process with Ubuntu. When I was trying to apply for membership with the openSUSE community — going back to the question I had in the previous talk about communities, the separate alcoves, and the lack of bridges — when I applied, if you weren't active on the mailing list, they wouldn't even glance at you for membership. So it's really interesting to hear not only how things have changed with openSUSE, but also how things work in the Ubuntu community. Thank you so very much.

Of course. There's a common misconception, too. People say, "Oh, Launchpad karma — if you have all this Launchpad karma, you're definitely an Ubuntu member, right?" That's not the case. And as for being active on the mailing list — here's the thing: in 2024 we're really going through a big transition point, as was mentioned earlier. We're looking to move a lot toward Matrix, toward Discourse, and so on. It's not because the mailing lists are bad; it's not because IRC is bad. I love IRC. I love my IRC clients, and if I could use them forever, I would. I would love to have a Matrix client or Matrix plugin for them — and before you say it, I know WeeChat exists; I still use IRC.

So we should work to really bridge these gaps between the old, classic style of mailing lists and what we have today. It comes down to bridging the gap between communities, or people, who are younger and grew up with technology — who want something like Matrix, something like Discourse — and the people who have been there forever and have the routine and the existing infrastructure around that. I think that's something we're working on.
I call it reducing friction within the community; other people call it defragmenting the Ubuntu community. I think that, in general, if we reduce friction for people coming into the community, it benefits not only the community itself, but also the people who are drinking from the fire hose, quite literally. There's a lot of incoming work you have to deal with, and being able to delegate some of that away is just amazing.

This is something I'm going to talk about in my Lubuntu talk tomorrow. Essentially, we're at the point with Lubuntu — and I never thought I'd be able to say this — where we have the largest team out of all the flavors. I've seriously tried to look: is there a bigger team? Is there a comparable team? And I can't find one. And it's not to say that it's me, because I'm not Lubuntu — the ten people on the Lubuntu team, as a whole, comprise Lubuntu. We have someone working on documentation, people working on different development tasks, QA people. Everyone plays a part. If somebody wants to come into the Lubuntu community, I'm more than happy to help usher them in.

Any further questions? Or do we not have much time left? Oh, okay.

[Audience] Real quick, an additional path to membership: you can just do community outreach, like I have. And being autistic, that's not easy for me. So just find something that works for you.

Thank you. I'll add one thing to that too, up here for the live stream. I did make the point earlier that I'm on the Community Council, which is the largest governing body on the community side of Ubuntu, and that I've done this and that. But the reason I'm here — the reason I have that whole Ubuntu journey, the reason I do this every year and run the booth and get to spend time with all of you — is because 16 years ago somebody from SCALE reached out and said, "Hey, there's an Ubuntu California team."
"You're nearby. You do events, right? Do you want to run a booth?" We said yes. The reason I'm here, the reason people recognize me — which is still weird to me — is because somebody asked me to run a booth because I live nearby. And it wasn't just me: Neil Bussett, Greg Simonian, Joe Yasemoto, Joe Smith. Someone asked us to run a booth because we live nearby, and we did, and nobody told us to stop.

So if you're interested in doing these things — giving a talk, say — we have an open process every year for these talks. If you're wondering what it's like to run a booth, stop by the booth, hang around for 20 minutes, and talk to the people who come up. We welcome that from the community, and a lot of people have gotten their start that way.

Is this your first talk? I think the last talk I gave was five years ago. So, you know, he said, "Well, let's go for it" — and I've been a public speaker for about 16 or 17 years now thanks to that. And I know exactly when you relaxed, because we have such a wonderful, friendly audience here. I always tell people SCALE is the best place, the friendliest audience, to try these things out and see if you're cut out for it. I know exactly when you stopped reading your notes and just started talking — which is when all that work of writing notes pays off. That was a fantastic talk, so thank you.

Do we have a question? Perfect. Could we have a little more time? Oh — a comment.

[Audience] Simon, thank you for doing the Ubuntu Weekly —

Oh, the Ubuntu Weekly Newsletter! Yes, very good. It's funny — we haven't actually met yet, but that's Liz. She ran the Ubuntu Weekly Newsletter for a good amount of time. How long did I run it for — a couple of months? Yeah, a few months.
I will never forget the time Liz asked for my address, and I said, "Well, why do you need my address?" "I want to send you stickers." A couple of weeks later I got these stickers in the mail, and I still have them. It's those little things that really help keep people in the community.

I did run the Ubuntu Weekly Newsletter for a few months, and it taught me a lot about writing — it taught me that I love to write. As a developer it's not really number one on my resume, but it was a very meaningful experience. And we always need more people helping with the Ubuntu Weekly Newsletter. So if anyone is interested, if you have some writing or editorial ability, anyone is welcome. You can join the — I believe it's the ubuntu-news IRC channel; otherwise there are Discourse posts. If you go to discourse.ubuntu.com and search for the Ubuntu Weekly Newsletter — it comes out every week, yes.

The other thing I'll mention, just so I make this clear: Ubuntu is what it is because of who we all are in the Ubuntu community. There are people here who have opinions that contrast with some of mine, and it's not for me to say a lot of those publicly. But what I will say is that despite having contrasting opinions on technical items, political items — and we have a rule in Ubuntu: we don't bring politics into it at all, because it really does just devolve — we all have different opinions, and yet we still find a way to work together. That's what's powerful about the Ubuntu community and the free software community.

Does anyone have any further questions or comments? I don't know if Jason counts... you count. What's that? Ah — sorry, Nathan.
[Nathan] I feel the need to make you exercise today. What is it — getting your steps in? Yes, get your steps in.

So I definitely really liked this talk, because I often find that a lot of people — even back when I was in university and said, "Oh, I'm going to work at Canonical and develop open-source software" — would ask, "How do you get involved with that?" And my response was, "It just happens. Sometimes you just fix a bug, and the next thing you know, you're a core maintainer." But one thing I really want to ask is this: if you were to take little Simon today — little 13-year-old Simon, who is, you know, eight months younger than me — and tell him, "Hey, here's an opportunity for you to get involved and make meaningful contributions to Ubuntu," what would you say? Because I know that when I first started, it was extremely daunting. Ubuntu is a very well-established community with a long history, used on something like 40 million devices, potentially even more. So how do you become that bridge in for a new contributor who wants to do something?

The biggest thing I would say is: join the community.
We have Matrix channels, we have IRC channels if you're an IRC fan, and we have a Discourse instance. The first thing you'd do to make that impact is join one of those and really just get involved. Take the pulse of the community, see where it's at, and find something that makes you passionate.

This is something I do within the Lubuntu team. When somebody joins our channels, our spaces — wherever it ends up being — I ask them a simple question: "What do you like to do?" And to them it's, "Well, what do you mean?" Do you like to do development? Do you like to do bug triage? If they give me an indecisive answer, I'll give them a little taste of everything to help them discover what they like to do. Part of me wishes I could be that person for that 13-year-old Simon — the person who welcomes people into the community.

So that's really what I'd say: join the community, get involved, get a pulse on things, and then just start experimenting, playing around. I've said it maybe three or four times now, but I'll say it again: a lot of this involves failing over and over and over again. It's that persistence — "I'm not going to give up, I'm not going to let it get the best of me, and I'm going to continue doing what I think is right." Of course there are limits to that. But if you're looking at free software and wanting to contribute, that's the most impactful thing I could say you could do.

All right, sweet. Thank you — get those steps in!

Just a little thing I'd like to add — Ken VanDine, involved with Ubuntu desktop for many years now. If you have an idea of something you'd like to do within the community, it doesn't have to be code. It could be documentation.
It could be translations, any sort of thing. I would urge you: take the leap. Don't feel like, "Oh, I don't really know how to do this yet," or "I'm nobody because I'm not involved yet." You never become involved unless you take that leap. So take that step. It's the most welcoming community I have ever seen in my life — you will not be disappointed. Find a way to engage. Join a Matrix channel. If you don't have an idea but want to be part of something, jump in, say hi, and just listen — lurk, see what people are talking about, and find ways to engage. Don't be afraid. Just do it. That's the best way to get started, and nobody's going to tell you not to.

And I would even add on to that point: everyone — from myself to Nathan Haines to Ken VanDine to Jason — started at that point of "I want to join this thing, and I just need a place to start." Everyone had a start in the community, and the only thing really preventing you from becoming one of these people you look up to is having the confidence to take that first step. And it's difficult. It really is difficult sometimes.

Yeah — like you said, no one told me to stop running the booth, and you've been stuck with me ever since. You asked me 12 years ago, I think it's been now — about that — and it's been fantastic. It's made my life a lot easier.

Any additional questions? Jason has one. I'm still not sure he counts. Sorry about that.
[Jason] Yeah, so one last question I have for you, Simon, going back to the title of your talk: open source is not just code. I think that's a great mindset to have, because a lot of the time you might hop on GitHub, look at some random project that's recommended, see all this stuff going on — all these massive technical contributions — and just think, "Oh, I don't even know what to do." That's basically me whenever I look at a very mature Rust project; I'm a Python programmer by training, though I do a little Go on the side. So, circling back to new contributors and taking that leap: how do you think we, as a community — as established members like yourself — can reach out to these individuals and encourage them to join? How can we be that bridge, that support, that makes them comfortable enough to take that leap, so that when they do try and fail, they're not discouraged into giving up?

I'll do two mics.
I'll tell you how I do this — I'm not sure I have a wider answer, but hopefully this one will inspire some feedback.

I've learned to identify when somebody is a newer contributor. When they have that interest, when they want to jump into something, it's pretty easy to spot — and it's up to the existing people within that community to say, "Okay, I'd like to help you." That's a really big thing. As the release manager of Lubuntu, I take on the tasks that the other nine people don't really want to take on, and one of those is that I put a very high value on giving somebody new the time of day when they come in. It may seem silly — okay, you have 500 people joining your channels, all wanting to contribute — but that's actually a good problem to have, not a bad one. You should have people within your team who are focused on that community management aspect. I look up to the Community Council, and I look up to the work that the Canonical community team does as well, simply because they're driving a lot of that effort.

For anyone running their own community, or anyone who even wants to get involved in Ubuntu, it's about finding that person who will help you take the next step. For me, that was Walter. Now, Walter's a busy guy — he can't be that guy for everyone — but it's finding the person who can really mentor you, and providing that mentorship from the project side. That's what I would say.

Okay, so we have a little break, and coming up at noon we have Ken VanDine with a talk you will not want to miss. So stretch your legs, grab water if you need it, but make sure to come back here by noon.
It's going to be amazing. And thank you, Simon. Also, before you head out: if you'd like to grab any merch — whether you're going out for the break or planning on coming back later — we do have some Ubuntu stuff up here, so feel free to stop by and pick some up.

Welcome back from the break. It's time for our next talk. Ken VanDine is a rock star, and because I'm impatient, I will just say the name of this talk — "Ubuntu Core Desktop: Immutable, Secure, Reliable" — and let him get to it. Ken?

Hello, everybody. I hope you're enjoying your time at SCALE so far, and definitely UbuCon — I know I am. It's been a long time since I've been here; I think my last SCALE was 2007, before UbuCon started, so I'm excited to be here.

I want to talk a little bit about Ubuntu Core Desktop. I'll apologize to anybody who's been to the Ubuntu Summit or watched the live stream — part of this is a repeat of what I did at the Ubuntu Summit in Riga — but I have actually added a bunch of content. A few of the slides I'll skim past a little, because you can watch that on the internet at your leisure, and this audience probably knows a fair bit about Ubuntu already.

So, who uses Ubuntu? You can probably guess — hopefully most of you do — but gamers use Ubuntu; so do scientists, makers, artists, educators, IT professionals. It's a wide array of people, which is fantastic. Where do they use it? Ubuntu Desktop is used all over the place — in manufacturing, in government offices; you can find it in pretty much any sort of environment. And our users tend to value certain things: privacy, performance, security — our typical users are more aware of these sorts of things — and choice, of course. These are some of the things we feel our users really value.

And of course, what we're really here to talk about is Ubuntu Core Desktop.
But before we dive into what Core Desktop really is, let's talk a little about what Ubuntu Core is. I'm sure many of you have heard of it. It's usually thought of as something used for IoT devices and embedded environments, and yes, until now those are primarily the places where Ubuntu Core has been used. It's fully containerized Ubuntu, optimized for size, performance, and security. Basically, if you need something that runs in a small space, is robust and reliable, is guaranteed to be up all the time, and won't break when it updates in the field, you really want to be running Ubuntu Core.

Some of the primary benefits of Ubuntu Core: security and maintenance, of course — the software stack has the longest support window out there. It's fully containerized, with separation between your applications and your kernel: every app gets installed in its own little container. Over-the-air updates utilize deltas and rollbacks, so it only downloads the bits that changed, not the entire package, every time it has to download — and if an update fails, it rolls back to the previous known good. It works in air-gapped environments as well. And of course, you get the same kernel and libraries in your production and development environments, so you can be sure your application is going to work in production.

And we can't talk about Ubuntu Core without talking about snaps, right? I'm sure everybody's heard of snaps — some controversy sometimes; everybody loves that. But the primary gist of what a snap is: a fully confined package that contains everything it needs to run.
So, regardless of what environment you're running the snap in — let's use Firefox as an example; Firefox in Ubuntu is a snap — if you test a Firefox version on, say, an Ubuntu 22.04 device, you are sure to get the same experience out of that exact revision from the Snap Store on, say, an Ubuntu 18.04 device. You don't need to test your applications on all the various host operating system versions they may need to run on; you know they're going to run, because snapd provides that level of support.

Snaps also provide a lot of security built in, with mediated access to system resources: your camera, your microphone. An app doesn't just have unfettered access to these things — you can turn that access off if you want; you can prevent the app from having it at all. This application you just installed off the internet can't go scavenge through your address book, find all your contacts, and email them — it doesn't have access to that information. You have to explicitly allow access when it needs it.

And of course, the package maintainers or upstream software vendors can keep a very close eye on when the snap needs to be published again, to ensure you're getting all the CVE fixes. Maybe your app depends on OpenSSL and there's an exploit in OpenSSL: the snap publisher can just rebuild their snap and automatically push out updates to users, and the version of OpenSSL bundled in the snap is now patched.
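The interface mediation described above can be inspected and toggled from the command line. A minimal sketch of such a session — the snap and interface names here (Firefox, `camera`) are illustrative; run `snap connections <snap>` to see what your system actually exposes:

```console
$ snap connections firefox        # list which interfaces the snap is plugged into
Interface     Plug              Slot      Notes
camera        firefox:camera    :camera   -
...
$ snap disconnect firefox:camera  # revoke camera access for this app
$ snap connect firefox:camera     # grant it back later if you change your mind
```

Disconnections made this way persist across refreshes, which is what makes the "you have to explicitly allow access" model workable in practice.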
You don't have to worry about that sort of thing. And with over-the-air updates, snap versions can move forward and backward reliably. If you update to a new version and for some reason it doesn't do something you need, you can roll back to the previous version. You can install older versions and ensure that everything needed for that version still works. That's not always the case in classic Ubuntu, where if you download the .deb of an older version of an application, maybe a dependency it needed has changed — those things aren't all self-contained, so you're subject to some instability there, or it may not behave the way you'd like.

With some snaps you can also control which channel things come from. Let's use LXD as an example: LXD publishes their versions in what we call tracks. If you want to follow, say, the 4.0 series — even though latest/stable is in the 5.0 series — you can pin your device to only take updates from the 4.0 track, and as new fixes for that stable release series go out, you automatically get them and stay on 4.0. You have that level of control, and you can pick and choose.

Error handling and automatic recovery: snaps have a built-in mechanism for health checks. As the publisher of a snap, you can bundle scripts that say: if these things don't return true, it's considered a failure. If an update fails a health check, the snap automatically rolls back to the previous revision — so as a user, your system keeps working, and you can ensure everything is the way you expect it to be.

And of course delta updates: only the bits necessary are downloaded, not the entire package, which is also great — especially in environments where you may have, say, a 4G modem built into an IoT device. That sort of thing is very important there.
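The LXD track-pinning workflow just described looks roughly like this on the command line — a sketch, with the channel names illustrative; `snap info lxd` shows the tracks that actually exist at any given time:

```console
$ snap info lxd                           # lists channels, e.g. 5.0/stable, 4.0/stable
$ snap install lxd --channel=4.0/stable   # pin to the 4.0 track from the start
$ snap refresh                            # stays on 4.0/* — only 4.0 point releases arrive
$ snap refresh lxd --channel=5.0/stable   # an explicit opt-in is needed to change tracks
```

The key property is that a background `snap refresh` never jumps tracks on its own; moving to a new major series is always a deliberate command.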
So now, on to Ubuntu Core Desktop. This is our solution for a fully immutable, secure, and modular desktop operating system. The idea is to build on that same rock-solid Ubuntu Core experience we've spent the last decade creating, and leverage it in the desktop space — where you may need a really reliable, secure, robust environment, say in an enterprise with 10,000 machines, maybe a call center, where the machines have to be up, always working, in a very managed environment. This is ideal for that.

As I mentioned, we've been working on Ubuntu Core for over 10 years now, primarily targeting the IoT space, and we feel this is a great opportunity to start building a desktop experience on top of it. Initially we're looking at thin clients and enterprise users as being very interested in this sort of thing, and probably some student users as well. You can very much think of this as a Chromebook-type experience — though not quite as limiting as a Chromebook: you can install pretty much any Ubuntu software on it, as long as it's available as a snap, and those run as native applications, not just web-type applications. If you need an environment that's really reliable and secure, you'll get that.

I talked about what we're targeting now, but here's the progression of the spectrum of devices Ubuntu Core has targeted. We've more or less conquered the IoT space. We've been in kiosks for quite a while — digital signage, those fancy digital ordering boards in fast food restaurants; those sorts of things are sometimes running Ubuntu Core. Single-purpose devices are very similar to the kiosk experience, but it doesn't have to be just a browser.
It could be a device that just needs to run one application — for managing a 3D printer, say — or a management node for something on a plant floor in a manufacturing environment. Those are very commonly found running Ubuntu Core.

Now the next step is going after thin clients and broad enterprise deployments, which I'll talk a bit about in the second half of this talk — that's where we see this going. We're not quite ready for daily use yet, although I will say I'm using it daily: I am actually presenting this from Ubuntu Core. I use it for most of my everyday needs. I'll admit I use a ThinkPad for a fair bit of my development work, but when I'm on video calls all day with colleagues, customers, and partners, I'm usually on my Ubuntu Core desktop machine.

Once we feel it's really ready for daily use, we'll start tweaking the experience to make it more developer-friendly. We already support things like container workflows: you can install LXD, create containers, and inside a container you can do anything you want. You could create a Fedora container and do some RPM packaging if you wanted — that totally works. We're still going to polish that experience to be better integrated with some popular IDEs for this container-based development workflow, but we're working toward that.

So, a little bit about how it's built.
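The container-based development workflow mentioned above is ordinary LXD usage; a sketch under the assumption that an LXD image remote offers a Fedora image (the remote, image alias, and container name here are illustrative and vary by LXD version):

```console
$ snap install lxd && lxd init --auto        # one-time LXD setup with defaults
$ lxc launch images:fedora/39 rpm-dev        # a full Fedora userspace in a container
$ lxc exec rpm-dev -- dnf install -y rpm-build rpmdevtools
$ lxc exec rpm-dev -- bash                   # drop into a shell and package away
```

Because the container has its own userspace, nothing about the immutable host changes — you can experiment, break things, and `lxc delete rpm-dev` when done.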
This is going to be a fair bit different from "classic" Ubuntu, as we're calling it these days, because the concept behind Core is a little different: everything is a snap. We have the kernel snap at the bottom. We have this thing called a gadget snap, which I'll talk more about later, but it effectively provides metadata about how the operating system image is going to look. snapd itself is a snap, and it's what manages all the snaps on the system, the resources, and so on.

Then we have something called a boot base, which is effectively the minimal root filesystem the operating system needs. This minimal root filesystem is the thing every snap has access to on the system — read-only, of course, though a snap can execute the commands expected to be found in a root filesystem.

And this is one of the very interesting parts: we actually have the Ubuntu desktop session as a snap. We're running the entire GNOME desktop environment inside a fully confined snap, where it does not have unfettered access to everything on your device. You could actually block access to, say, the camera, so GNOME couldn't access your camera if you wanted — I don't know why you would want to, but you could. It's running inside its own container, and an application behaving badly will not affect your desktop experience.

Then there's the concept of additional bases. When a snap is built, it has to declare what base it's designed to work with, and our bases are what we call core snaps. Right now, core22 is based on Ubuntu 22.04.
We will soon have core24, based on 24.04. Say you have a snap in the store whose publisher is shipping new versions of their software but doesn't want to rebase it, because maybe it depends on an old glibc from 18.04 or something. They can define core18 as their base, and that's fine: even though your system is running, say, 24.04, that application will behave the way the developer wanted, because it will have access to the glibc in core18 — when that snap gets installed, core18 gets installed as well. So the additional bases provide the components necessary to run that software, and it doesn't affect the rest of your system. You can have two glibcs there and it's fine.

And then of course applications: any snap you want, you can install — your Firefox, Chromium, Steam if you want to do some gaming, those sorts of things.

A few use cases: it's really nice having the kernel as a snap. Imagine a day where we have, say, a gaming-optimized kernel with some things tweaked — maybe not as reliable for a large-scale enterprise deployment, but definitely something you want for your particular workflow. Or maybe you need a real-time kernel, or a FIPS-compliant one, something like that. You could switch channels for which version of the kernel you want to track — reliably, without worrying about ending up with an environment that doesn't work on your system.
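The base declaration described above is a single line in a snap's build recipe. A hypothetical snapcraft.yaml fragment — the snap name and part are made up; `base: core18` is the real key that pins the runtime to the 18.04-era userspace:

```yaml
# Hypothetical recipe for an app that still needs the 18.04-era glibc.
name: legacy-app          # illustrative name
version: '1.0'
summary: App pinned to an older base
description: Built against core18 so its old glibc keeps working.
base: core18              # runtime base: libraries from Ubuntu 18.04
confinement: strict
parts:
  legacy-app:
    plugin: nil           # real recipes would build the app here
```

When a user installs this snap on a core22- or core24-based system, snapd pulls in the core18 snap alongside it automatically.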
You can just replace the kernel and you're good to go.

And back to what I mentioned there with the desktop session. I guess I'm going into more detail on this slide, but it's the same sort of thing: the idea is that a flavor, like Lubuntu, could have their own desktop session snap that provides their experience on top of all these other components. As a flavor, you don't need to replace all those other components; you just drop in a snap that provides your desktop session. There could be a KDE one, a MATE one, those sorts of things. Or maybe you want a variation of the Ubuntu desktop session that is GNOME with most of the Ubuntu experience, but without the Ubuntu dock. You could create your own snap that inherits all of our stuff and just overrides a few extensions that get loaded, that sort of thing, drop that into the system, and you're good to go.

And imagine a day where, perhaps in our development series (right now we're working on 24.04), we had a develop track of that Ubuntu desktop session. Your entire operating system is super stable, the kernel is reliable, it's going to support your hardware, all those sorts of things, but you just want that latest version of GNOME. Maybe we're not doing it yet, but say we were building daily builds of GNOME into an edge channel of a develop track. You could switch your desktop session to use develop/edge, live on that bleeding edge, and get the newest version of GNOME, whether it's stable or not, without tainting the rest of your system and without relying on an unreliable kernel. That's totally possible. I'm hoping someday we will do something like that, and then we'll effectively have a rolling release.

Okay, so the question was about the kernel snap just being replaced: he was asking how reliable that is. Like, if you forget to switch your kernel, what happens?
Well, it's possible that your kernel doesn't actually support your hardware, but you would know that right away, and you could do a snap revert, which rolls back to the previous known-good revision. So when you switch to that kernel, if it doesn't support all your hardware, you will know very quickly, and in most cases it would probably fail to switch and automatically revert. But as a user, if you notice that a snap you just updated, or maybe one where you changed tracks, like in the kernel case, isn't doing what you want, there's a revert command that will revert that last change back to the known-good state. So you can very easily get back, even if your system is unstable.

And it does keep multiple versions around, so you could easily have multiple kernels installed and very rapidly switch between them with just a reboot. If you wanted to keep the gaming kernel around along with your everyday one, you could make that change with a quick reboot as well. There's a thing built into GRUB on Ubuntu Core for dealing with that sort of scenario. I've never actually dealt with it myself, but there is an option, if you bring up the GRUB menu on Ubuntu Core, to do something like that.

So what makes this so exciting? We all love Linux and open source and Ubuntu, of course, and sometimes it doesn't always do exactly what we want. Sometimes we install somebody's software and it doesn't behave well. Often we enjoy the tinkering part of it and making it work, but there are oftentimes environments where that's not desirable, right? Stability is key; you just need to use this computer to get work done. That's what makes this very exciting. Classic Ubuntu is not going away, right?
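The kernel-switching and rollback flow described above maps onto a few snap commands; a sketch (the kernel snap's name and its tracks vary by device):

```shell
sudo snap refresh pc-kernel --channel=24/stable   # track a different kernel line
snap list --all pc-kernel                         # retained revisions you can switch between
sudo snap revert pc-kernel                        # roll back to the previous known-good revision
# The same channel mechanics would apply to e.g. a desktop-session snap's develop/edge track
```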
This is an additional offering that will provide the level of reliability that is necessary for Ubuntu in many environments. So, things like system files just can't be randomly altered. You can't open up a terminal and sudo rm some important piece of that root filesystem, because that root filesystem is read-only. In fact, the file that you want to delete is not actually a single file; it is contained inside of a squashfs file that happens to be mounted in that location. So you can't taint those kinds of things accidentally.

Atomic updates, which we talked about a little bit there: the fact that an update could potentially break something, plus the health checks and that automatic rollback, is huge. And of course, running each application in its own confined environment means you can trust that Spotify is going to work the way Spotify is supposed to work, because it's been tested; we know it's going to work. You can install it on an Ubuntu Core desktop, and even though we've only tested it on classic Ubuntu, you'll know that snap is going to work.

We support things (and now classic Ubuntu is getting this as well, which is exciting) like, for example, TPM-backed full disk encryption and Secure Boot, all stuff that Ubuntu Core has actually supported for many years now. Our TPM-backed full disk encryption story on classic Ubuntu is actually built entirely off of the Ubuntu Core full disk encryption story, which has been around for several years. It's well proven already.

You can customize various elements, like I talked about a little bit, maybe having a different desktop session, things like that. You could easily decide,
"Oh, I want to switch to KDE today." You could easily make that switch: you could install that other snap and have a KDE environment and a GNOME environment installed without worrying about all those dependencies, because the KDE environment will have all of its own dependencies built in, and it will not conflict with the versions of glib or whatever that you need in the GNOME environment.

And of course, in creating these desktop sessions, you could actually have something that's very locked down or trimmed down. Maybe you disable access to many things. Maybe you don't need the full dock and app grid in GNOME; you just need the ability for the user on a thin-client workstation to launch two apps. You could drop in a desktop session that only allows access to those two apps, in a very controlled type of environment.

Manageability is one thing that's very key for this type of deployment in a large enterprise: knowing that your systems are all identical, right? You need to know which systems are potentially exposed to which potential exploit, those sorts of things. You need reporting, those kinds of things. In this case you can trust that all of these systems are exactly the same. There isn't some random file that somebody downloaded off the internet because they have sudo access on their system, dropping in a replacement for /usr/bin/ls. That can't happen in this sort of environment, right?
And of course, automatic and atomic updates. The systems generally update automatically; you can't actually turn off that automatic refresh these days, but oftentimes you do want that automatic update. In an enterprise-type environment, you can actually gate when those updates go out to devices, while the devices still check for updates automatically every four hours. So in that enterprise you say, okay, I want to turn on this update for this package now, and those devices will all get it in the next few hours.

Snapped applications only have access to the things they're permitted to, so you can really control those kinds of things. In an enterprise-type environment, if you wanted to disable everybody's camera, you could remotely disable all the cameras on all of the Ubuntu Core devices inside your enterprise.

So, I've mentioned enterprise a fair bit already. I'm going to dive into a little more detail on how we would envision an enterprise rolling out something based on Core Desktop in a large-scale environment. Back to this diagram here that shows all the various components: the piece that was on the other slides, but that I didn't talk about yet, is the model assertion. This is effectively a map of all the components that go into building that operating system image. It's really just a JSON file.
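On the device side, the automatic-refresh behavior mentioned above corresponds to snapd's refresh controls; a sketch of the relevant commands (enterprise-wide gating itself happens through the store or a management tool like Landscape):

```shell
snap refresh --time                       # show the current refresh schedule
sudo snap refresh --hold=72h              # postpone all refreshes for three days
sudo snap refresh --hold=forever firefox  # hold one snap until explicitly released
sudo snap refresh --unhold firefox        # let it update again
```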
I'll show you an example of that model assertion here in a bit, but it has some metadata about how you want to construct this thing, and this is where you define the various snaps that need to be included.

So here's a little snippet of that JSON I talked about. I'm not going to go through most of the details, but there are a few interesting things here. One is that the grade is listed as signed. You could also use dangerous, which means not everything necessarily has to be cryptographically signed in order to run; but if you use signed, you do have to have a signed model assertion for the system to even boot.

You can specify the storage safety here. You'll notice in my example I specified prefer-encrypted, so when this gets installed on a machine, if a TPM is available to it, it will automatically use that TPM and automatically use full disk encryption. You can also require it, so it will only install itself on machines where it can use the TPM, or you can disable encryption completely if you would like.

And you specify what the base is; in this case it's core22-desktop. And there's a stanza there, which I have as dot dot dot; this is where you list all the various snaps that go into it. And yes, I will cover that in a minute. So here's a little excerpt.
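A hedged reconstruction of the model file being described; the brand, key name, snap names, and channels are all illustrative, not the exact contents of the slide:

```shell
cat > model.json <<'EOF'
{
  "type": "model",
  "series": "16",
  "brand-id": "my-brand",
  "model": "ubucon-pc-desktop",
  "grade": "signed",
  "storage-safety": "prefer-encrypted",
  "base": "core22-desktop",
  "snaps": [
    { "name": "ubucon-pc", "type": "gadget", "default-channel": "23.10/stable" },
    { "name": "pc-kernel", "type": "kernel", "default-channel": "23.10/stable" },
    { "name": "ubuntu-desktop-session", "type": "app", "default-channel": "latest/stable" },
    { "name": "firefox", "type": "app", "default-channel": "latest/stable" }
  ]
}
EOF
# Sign with a key registered to the brand account; the output is the trusted assertion
snap sign -k my-key model.json > ubucon-pc-desktop.model
```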
This isn't a complete list of what I have in mine here, but this is how you define, in that JSON file, which snaps to include. So in this case (I'm going to get into the gadget here in a few minutes), that first one is a gadget snap, which might be named ubucon-pc-desktop; its type is gadget, and it has some metadata about how things are built. Then which kernel to use (in this case I'm choosing the kernel from the 23.10/stable channel), which desktop snap (the Ubuntu desktop session from latest/stable), which version of Firefox to include, and you could add any number of snaps that you may want to have there. You define all of these in the JSON file, and then you sign that JSON file using a snap utility for signing, which creates a signed model assertion that can be trusted.

Now we'll talk a little bit about the gadget. This is a bit of Ubuntu Core terminology that I think a lot of people aren't familiar with, but effectively what the gadget does is define various information about what the operating system image needs to look like. For example, the storage layout, what filesystems may be necessary; maybe you need to create three or four filesystems for some particular workload, and you can define that in the gadget. You can also define things like, and I think the next slide actually covers it, this is a little snippet of what it looks like.
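A rough reconstruction of such a gadget.yaml excerpt (keys abridged and illustrative; the exact experimental flags will change as Core Desktop matures):

```yaml
# gadget.yaml (excerpt, illustrative)
defaults:
  system:
    experimental:
      user-daemons: true   # example of an experimental toggle
    refresh:
      retain: 2            # keep two revisions of every snap for fast rollback
```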
It's a YAML format. If you see the section there where it says defaults, this is where you set various settings that need to be honored throughout your device. We have a couple of experimental things enabled here, because some of these things are still experimental in Core Desktop, but you see the refresh retain value there of two. That tells the system to keep two copies of every snap installed. You can set that to three, four, five, and it'll keep that many; it just uses more disk space. But if it keeps multiples around, you can very quickly and easily switch between them without having to download new revisions and things like that. Oftentimes it's just a quick restart of the application, or a reboot in the case of a couple of special snaps.

I'm going to talk a little bit more about Landscape in a few minutes, but Landscape is a fleet management solution that can manage large-scale deployments of these machines. This is an example of an Ubuntu Core device being able to auto-enroll in Landscape. You can imagine a fleet of, say, 10,000 machines inside an enterprise. They all get booted up for the first time, and instead of having to physically sit at each machine and put in things like the URL for finding the Landscape server, or a registration key to allow it to register, which you would traditionally have to do manually, with this Ubuntu Core type solution the machines will actually auto-enroll in that Landscape instance. They will just start doing so as they boot up.
They'll start being populated in Landscape, so you can manage those machines remotely. Those are really just settings on the snap that's provided; in this case landscape-client is a snap which provides an agent on the device that allows Landscape to manage it, and these are snap settings that get set automatically when it gets installed on the system.

And then we have a relatively new tool called ubuntu-image. This is our kind of next-generation way of building Ubuntu images. I'm looking forward to the day where we can use this to build our classic ISOs, because I'm sure, Simon, you're well aware of how much fun that is. ubuntu-image is much more modern: it takes a declarative type of input and outputs a bootable operating system image. In this case it takes as input that signed model assertion I talked about, and it outputs an operating system image that can be booted on a system. Maybe it auto-installs itself, or maybe it provides you a guided install; there are any number of ways it can actually be installed, but it will create that image. ubuntu-image has been used for a long time now for building Ubuntu Core images, and it supports Core Desktop well. We're not quite there yet on using ubuntu-image for classic, but we are getting close, so I'm looking forward to that. But that's how we actually produce an asset that can be used in the field to install on somebody's system.

Now, this is where I mentioned Landscape a few minutes ago. Landscape is for large-scale systems management, right? If you have a fleet of machines, you need to be able to create reports on them. You need to know what your exposure is to some particular bug; say Firefox version X has some sort of CVE against it, and you need to know which devices in your organization have that version of Firefox.
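Going back to the build step for a second: with the signed model in hand, producing the bootable image is a short command (filenames illustrative):

```shell
ubuntu-image snap ubucon-pc-desktop.model   # consume the signed model, emit a bootable image
ls *.img                                    # the result can be written to disk or a USB stick
```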
You have a lot of control over that automatically, by being based on Ubuntu Core, with the snaps and things automatically updating and so on. But in Landscape you can actually create reports, get some visibility on exactly what's installed on your systems, and control that too. Maybe you say, I want to update these 100 machines to this new version of something and see how it goes first; you could actually control that sort of thing, pushing out rollouts, security fixes, things like that.

Landscape has been used for a long time already on classic Ubuntu, so all of these things were available with existing classic Ubuntu; it has recently gained support for Ubuntu Core. And in an enterprise-type scenario with Core Desktop, we see Landscape being a critical component. You can automate any sort of thing; you can create automated monitoring, graphs, all kinds of things for the devices in the field.

So you can manage software on the devices, and services that may be running. Maybe you need to find out whether the devices in your enterprise have a particular service running or not, because at the time that you built the image you weren't really thinking about it, but now you realize it could potentially be an issue. You could scan the devices in your enterprise, find which ones may be running that service, and remotely disable it if you need to, or even enable it. You can remotely manage the user accounts on the system.
So if you need to create an account for some new software that's going to be installed on the system, or maybe provision a machine that's now being reused for a new person, you could create a new local account. Or you could even manage things like Active Directory type integration, allowing a machine to enroll in a domain, things like that. Of course, viewing the device state: what state the machines are in, which ones are up now, what their current uptimes are, resource utilization; are the machines running low on memory? You can see all of those sorts of things. And of course, accessing the logs: being able to capture logs to debug problems on a particular machine. Maybe a user is having trouble, they've called the help desk and submitted a ticket; you could remotely pull system logs from the machine via Landscape.

That's it. Questions? I'm sure there's plenty. Simon?

I have one question for you. All these are really great ideas; I love them. And the one piece of constructive criticism I keep hearing, or criticism regardless of whether it's constructive or not, over and over, is: the Snap Store is proprietary, and so is Landscape. What do you say to the people who have those concerns?

Well, yes. The backend of the store itself, yes, is proprietary. Everything that runs on the systems that actually get deployed in the field is all open source. I can't speak to whether or not any of that may change in the future, but yes, the backend is proprietary. That question has been asked on Reddit like a million times, and I am not necessarily qualified to answer it, but I will assure you that everything that actually gets deployed on anybody's device is 100% open source. And we have well documented the API to the store.
It would be easy for... well, "easy": it could be completely replicated if somebody wanted to create a store that functioned the exact same way. And we actually did see that happen, and I'm very glad that it did, with the Click store, which was the predecessor to the Snap Store. There's the OpenStore, which got taken over by UBports, that was basically developed to the spec we had for our store, and that is still running today for the UBports project. So it's been done before; somebody has created an alternative store, and it could happen for the Snap Store as well.

Yeah? Earlier, and you may have actually explained this and I just didn't quite hear it, but you mentioned, I think on the manageability slide, how a user with sudo can't just download a random version of ls that does something totally different. I was curious what the enforcement mechanism for that specifically is. Is that through snapd, or is that through AppArmor? How does it get enforced?

So, snaps do heavily rely on AppArmor, but in that particular case, if you're trying to drop in a /usr/bin/ls right in that directory path, that filesystem is read-only. You can't put a file there and replace the existing one, because it's read-only. All of those sorts of things are read-only; it's completely immutable. And if you did somehow drop something in a location where something could try to execute it, the snap would actually fail to launch, because the signature doesn't match anymore.
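What that immutability looks like from a shell on a Core system (illustrative; on classic Ubuntu these commands behave differently):

```shell
sudo touch /usr/bin/ls   # rejected: the root filesystem is mounted read-only
mount | grep squashfs    # the base and every snap appear as read-only squashfs mounts
```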
A snap itself is actually a single squashfs file, basically a single file that can be mounted in a location to provide a filesystem. At runtime, snapd mounts that squashfs file so the files inside it can be executed, and snapd won't even launch the snap if the signature on that squashfs doesn't match. So if you tried to unsquashfs Firefox, tweak something, resquash it, and somehow coerce your system into trying to launch it, it would still fail to launch, because the signature of that squashfs wouldn't match. So you can really trust that those bits are what they say they are. Any other questions?

If I have a software package that's only currently available as a deb, for example, what would need to happen to run that on Ubuntu Core?

So in the current scenario, your best bet would be to create a container and run it inside the container. You could create an LXD container of, say, 22.04, install the deb there, and launch it, and it will work just fine. In that case it's unconfined, but only inside that container; everything inside that LXD container is segregated from the rest of the system. The better alternative would be to repack that deb as a snap and contribute to the ecosystem.

What about software that has kernel modules and other stuff, like NVIDIA drivers?

Yes, so actually right now that pc-kernel snap does include the NVIDIA drivers. However, that's something that's being split out, and you'll be able to install those sorts of things as separate snaps. So the NVIDIA driver could be a separate snap, and you can be assured that that version of that snap matches this kernel, and they will be updated in lockstep. The terminology has changed several times.
I think it's currently called components; snap components is the terminology for that, and that's coming very soon. Then you could have these sorts of drivers packaged as snaps, and they will match.

Things like DKMS, for example, are not going to work, where the module source gets compiled at basically runtime. But in that case you're actually tainting your system when you use DKMS, because any random, very untrusted thing could now be in your kernel space, right? So those sorts of things aren't going to work in this sort of environment, but I would argue they probably shouldn't anyway. Let's get them properly snapped in a trusted type of environment: published, vetted, and then manageable, so you don't suddenly have a kernel that doesn't support something anymore.

Heather over here has a question. Oh, you've got one back there. Two questions, real quick.

With Ubuntu Core Desktop, what do you expect to be the default applications, and how do you expect that process to work? And two, if I downloaded it today, what's broken?

Okay, I'll start with that question: if you download it today, what's broken? I would say not much. I use my Bluetooth headset daily with it; those sorts of things work. It will boot, and it will run most things that you want it to run. There are some quirks here and there; for example, with the Telegram snap, annoyingly, the app indicator doesn't pop up, and I haven't really debugged that yet. Little things like that, which you would expect to work, may not, but for the most part it does work. Like I said, I use it probably two-thirds of my day, every day, and I have for about a year. It's pretty reliable. There are some scenarios where something you want may not work, because it is running in a fully confined type of environment and not all apps were designed to play that way. But it does work. And what was the other half of your question?
Sorry, default applications. So, this is where every deployment will be a little bit different, but the ones that I've defined in our current reference build are: Firefox for the browser; GNOME Text Editor; the CUPS snap, which I guess is more infrastructure-related, and NetworkManager as a snap; GNOME Logs, so you can get log information easily without actually going to a terminal. You don't get quite the same terminal experience, because you don't have the same level of access to your system, though we do have a temporary console solution today that gives you some access; still, having something like GNOME Logs is handy. GNOME System Monitor is included, and there are a few other random things like GNOME Weather, a couple of GNOME apps like that. Not a lot; it's pretty light.

Yeah, I'll summarize: the first one is a question about debug symbols and snaps. Okay, that's a more complicated question, but I'll try to speak to it; Callahan may know more than me.

But anyway, the second half of that: we haven't really completely defined what a community release is going to look like, or what the default apps are. I would imagine it would probably be beneficial for us to do something similar to what we do with classic, where we have a minimal and, well, I guess a full build; we switched the naming, because the default used to be the full one. So we may do something like that, where we have two builds that include those things. But also, App Center is included, so you can easily install LibreOffice if you want it. So the question is whether there's a lot of value in including it or not. It's pretty trivial for us to add; it's just the JSON file we could drop it into, and we could have two different images built. We haven't really talked about that yet; it's still kind of a reference implementation, but that would be easy to do. The debug symbols question is a very
good one. Mozilla has done their own thing with Firefox, which is working out pretty well, but a lot of it does rely on their infrastructure. We've got some work happening now. Do you want to speak to this, Callahan? Okay. Oh, he doesn't have the microphone, though; try to be loud, I guess. Okay, he's got it.

Okay, I'll do my best. I'll try to summarize that: really, the solution for that is very similar to what I talked about with the kernel drivers being components. It's a new concept in snaps where you can build these components. So as a developer, you can create the debug symbols and they'll be packaged in the component portion, and then to actually utilize them you just download those components.

What about a debug symbol server sort of solution? Are we going to use debuginfod, or whatever that's called?

Yeah, the first bit of that is actually getting the debug symbols into the snap. So, yeah, it's been a long road to get there for the debug symbols, but I'm glad we finally have a path.

More questions? We've got a couple back here. I'll try to... well, I mean... yeah, okay.
So, what's the difference between, or the benefits of, Ubuntu Core versus some of the other immutable kinds of offerings?

Yeah, I would say the modularity is probably the biggest thing, and also really leveraging snaps. Snaps have been around for a really long time, and there are so many benefits built in to just using your applications as snaps, with your operating system effectively leveraging all those sorts of things. There are a lot of benefits there that those other immutable systems do not offer. But I'd also say the modularity is very compelling, because generally with some of those other immutable offerings there is a giant, say, rpm-ostree image that is one big chunk, and if you need to, say, install the NVIDIA drivers, you have to replace that big piece with another big piece that has the NVIDIA drivers installed. Or say you need to run some particular container workflow that actually needs to be part of that image. In our case, you could just install LXD or Docker as a snap, and it just works; you don't have to replace the root operating system to make that work. You could just drop in the component that you want, and your workloads will work. So we have a modularity that the others really don't.

We've been working on this for a really long time, and snaps have been around for a very long time now, longer than Flatpak. We've built up a lot of technology here over the years. We started by focusing a lot of our efforts on IoT and those sorts of things, but in doing so we've really built up a portfolio of great technologies, and it's all coming together now to provide enough to run a desktop on. It's taken us a while to get there, but I think the ability to have all those components, and the way they interact together, is really the differentiator.

I think you had a question too? Oh, same one, okay. Any more? Yeah? Okay, so yes,
I will say... well, I guess it's not important now, we're done with that one. The real limiting factor is this: I mentioned, for example, that our GNOME desktop session is actually running inside of a strictly confined snap itself, right? So something like, say, a terminal that you launch inside of that, unless we had a fully snapped terminal with all kinds of special privileges, would theoretically be running as if it's inside that same sandbox that the GNOME desktop session is running in. It wouldn't have access to anything else on the system besides the things that GNOME session is using. So the scope of the terminal will be dictated by the environment it's running in.

There are a few different ways we could solve this. For example, we could snap some popular terminal applications and create some snapd interfaces that allow some special access, to make it behave more like a classic system. We could certainly do that; I don't know if we will. I think the more compelling thing is to really harness containers. If you need to run scripts and do things for your work that require those sorts of powers, you just open up... well, we have a very nice app, still experimental, called Workshops, which is a terminal experience built on top of LXD, where with a click of a button you have a Fedora container up and running, and a terminal experience inside that Fedora container where you can run any sort of scripts you want. It feels just like any other terminal; you're just inside that container. And there are some other fun ones in the community.
We have a fun community contributor who's got LXD Terminal (I see you, Ted); he created a snap of LXD Terminal, which does a similar sort of thing. So we could have those sorts of things, but I really think the future there is leveraging containers more on the desktop, to give you that developer experience where, when you need the powers to run anything you need to run, you can do that inside of your container without affecting the rest of your system. Like I said, it's not necessarily for tinkerers; if you like tinkering with the operating system that's running on your host, it may not be the right solution.

Yeah? Well, again, container workflows, right? You can imagine a Visual Studio Code that's running on your system, but compiling stuff, or whatever it needs to do, inside of these containers. You could have web development going on inside of a container and access it from the browser on your host. So you can do that sort of thing in the container without tainting the reliability of your system. I think that's the direction we really need to be looking in, but again, some of those questions aren't completely answered, which is one of the reasons why we want to gather feedback from more folks, to see what concerns people have. And that's why we're not saying we're releasing it for developers yet. We're working our way there with the problems that we know we can solve well, and then we'll get it to the tinkerers later on.

Okay, how ready is this for somebody who may want to package a different desktop?

I will say you could certainly do it now. You may need to ping a few of us for some pointers, but I am happy to be that person if you want to try to package a different desktop, because so far we've only done GNOME, and I would really love to have at least one other reference to prove that it works. So which desktop are you thinking?
Okay, let's definitely talk about that. I suspect yours will be easier; he's got an unfair advantage. All right, any last questions?

Sorry, repeat that again? To change the shell? Oh, doing that sort of thing. Yeah, so that's another one of those things where the terminal would need to provide the shell that you want. So if you want fish instead of bash, I will say our current images we're building do not have fish, and they're probably not going to, because some of that comes from the base OS, the root filesystem, which is a snap. But you could absolutely have your own terminal app packaged with fish included, or, again, back to the containers: spin up a container, apt-get install fish, and you're done. And when these containers are created, you can map in your access to your GPU, your home folder; all those sorts of things can be mapped directly into your container, so it feels at home. That's what I do today with all my containers.

Okay, so, any progress on running a container inside a container? The reference case here is what we do today with Steam, right? Steam downloads a pressure-vessel runtime and basically creates its own container, and in our case we're running Steam in a snap, so it creates its own container inside of that. You have another use case that's very similar. As a generic kind of case, that's complicated; we've had to do some special stuff in snapd to make all the pressure-vessel stuff work. I would argue we probably could make the Steam support interface a little bit more generic, and make that work for what you're doing as well. So, precisely, I think maybe what we should do is actually look to make a more generic interface, not call it "steam", that allows all of the stuff that's necessary for pressure-vessel, and then it could be reused for snaps that are leveraging your framework.
So let's talk about that a little bit. I'm here today and tomorrow, let's chat; I think it would be very compelling, so I'd love to hear more about that. All right, I think we're out of time. Thank you all, I appreciate it. Hope you enjoy the rest of UbuCon, and enjoy your lunch.

Thank you very much for that; that was really good for me. I've been curious about this since Nathan first drew my attention to it, and it was well done. We do have a one-hour break. Enjoy, and be back here at two o'clock for orchestrating your home devices with Juju.

Hey there everyone, welcome back from lunch. We're just going to give it a couple of minutes here for more folks to pop in as they get back from lunch. Excited to have you here for the afternoon session. Thank you all for coming back; I hope you had a great and filling lunch. We have some more talks for you now, so you can fill your brain. Our next talk is Charming Your Home Network with Juju, presented by Alex Lowe and Callahan Kovacs.

All right. Thank you, and welcome, and thank you everyone for coming to our talk. Like you said, I'm Callahan Kovacs.
This is Alex Lowe, and we're here to talk to you about charming your home network with Juju. A little bit about us before we get started: Alex and I are both software engineers at Canonical on the Starcraft team, and as you can see by the wildcard glob, the Starcraft team works on Charmcraft, Rockcraft, and Snapcraft. These are all, of course, packaging tools used to package different things. A quick overview of those: Snapcraft, which you're probably familiar with, is our most popular tool, for packaging IoT and desktop applications. Next, Rockcraft is for packaging container images; this is similar to using Docker to make a Docker image, except Rockcraft makes OCI images. And most relevant to this presentation is Charmcraft, which is used for packaging server applications and the operations code for those applications. We'll get into more detail about that shortly.

But before we go any further, let's talk about you. Who are you? Perhaps you just want to run an application at home, like a media server, and you're not particularly interested in the installation or the maintenance; you just want it to work. If this describes you, then Juju and charms are not the right tech stack. In this case, just use the app's recommended installation methods, because they work, they have the most support, and they're the easiest to use. We're not here encouraging you to move to Juju if you have something that already works. But perhaps you're someone who's heard about these technologies, like Juju and charms, and you want to try them out at home and experiment with them, but you're not sure if it's even possible, or what charms you would deploy at home, because when you look online you see things like telco apps or Grafana, things that maybe don't make too much sense to run at home. Well, if you can relate to person two, then you've come to the right talk. We're here to show you how you can experiment with charms at home, with apps that you would actually use at home. In this
presentation, we're going to demo Juju and LXD to deploy some common home server apps. We'll show qBittorrent, the torrent client; OpenVPN as a VPN client; Home Assistant for home automation; and Jellyfin as a media server. We're also going to show you how you can charm other applications and how to contribute to the ecosystem.

Yeah, so a quick note on that. Juju has sort of two types of charms that you can run: you've got machine charms and Kubernetes charms, which are roughly what they sound like. Machine charms can run on virtual machines, on machine-like containers as with LXD, even on bare-metal hardware, depending on what cloud or Juju backend you're using. Kubernetes charms are specifically for a Kubernetes pod. With a machine charm, the charm runs as root in that machine, so you can install software, you can add repositories, anything that you need. The Kubernetes charms have access to Kubernetes; they can manage containers and images and things like that. So it's the difference between a vendor-agnostic "all of my cloud VMs as code" and a vendor-agnostic "all of my Kubernetes stuff as code", but Juju can handle both of them.

And a few disclaimers as well. First, this is a proof of concept. This is a relatively new area of interest, and in the charms we created for this demo there are plenty of missing features. And second, tools like Juju are certainly overkill for use at home, and that's okay, because that's not exactly the point.

And speaking of Juju, let's talk more about it. Juju is an open source tool to deploy, integrate, and maintain cloud applications on any infrastructure. And when I say any infrastructure, I mean any infrastructure. As you can see, Juju is designed to run on many different backends, and this isn't even a complete list. It abstracts this, so when you write a charm there's no backend-specific code to deal with. The only exception to that rule is what Alex just described: there are two categories of charms, the machine charms and the Kubernetes charms. For our deployments we chose LXD as the backend, which is a tool to run containers and virtual machines. Some of the advantages of using it are that it's lightweight and easy to use; the downside is that its integration with Juju is primarily for local testing, so some of the features available on other backends aren't implemented for the LXD backend. And now Alex is going to show us how to bootstrap Juju with LXD.

Yeah, so, in the top right corner there I'm going to be running htop, so you can see what system resources it's using. I'm going to do this inside of a brand-new Ubuntu 22.04 virtual machine. The first step is just creating that virtual machine, with four CPUs and eight gigs of RAM. And you'll see sometimes a 10x speedup or whatever flashing; that's just because I don't want to bore you with my slow internet. So, we've got this machine. The first thing we need to do is update to the latest LXD. Ubuntu 22.04 comes with the LTS version of LXD, which makes sense: LTS Ubuntu comes with LTS LXD. But we do use some features that are only available in newer versions, so we're just downloading that. The second thing to do is to install Juju, and hey, look, it's a snap as well.
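The setup flow being demoed here (the VM, the newer LXD, the Juju snap, and the bootstrap that follows) can be summarized as commands. This is a sketch under stated assumptions: the channel names, the VM sizing from the demo, and the "juju-host" name are illustrative, and `juju bootstrap lxd` assumes a recent Juju with the built-in LXD cloud.

```python
# The demo's setup steps collected as argv lists, assuming a recent
# Juju snap and LXD's "latest/stable" channel; the VM name "juju-host"
# and its sizing are illustrative. Printed rather than executed.
setup = [
    # a clean Ubuntu 22.04 VM with 4 CPUs / 8 GiB of RAM to host everything
    ["lxc", "launch", "ubuntu:22.04", "juju-host", "--vm",
     "-c", "limits.cpu=4", "-c", "limits.memory=8GiB"],
    # newer LXD than the 22.04 LTS default, for features Juju relies on
    ["sudo", "snap", "refresh", "lxd", "--channel=latest/stable"],
    # Juju itself ships as a (strictly confined) snap
    ["sudo", "snap", "install", "juju"],
    # initialise LXD, then let Juju create its controller inside it
    ["lxd", "init", "--auto"],
    ["juju", "bootstrap", "lxd"],
]

for argv in setup:
    print(" ".join(argv))
```

The last step is the interesting one: the controller lives in a container inside the same LXD, so no extra infrastructure is needed.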
So, pretty standard snap install so far. I actually also have Juju running on this laptop, so if there are questions later on I can do some live demo there. Juju is a strictly confined snap, so it comes with all of the pros and cons of that. I'm initializing LXD there, and then literally just running juju bootstrap lxd. This will create a machine, a container inside of LXD, which actually runs the controller. One of the cool things about Juju is you don't need any additional infrastructure to manage these clouds; it will manage itself from within whatever cloud you're using. So that was it: we have Juju up and running on this blank machine.

Now, what else is Juju? It's an orchestration engine for software operators, or charms, and with that said, let's talk about charms. Imagine you want to deploy an application. There may be a lot of steps: you have to install it, update configuration files, and then enable and start services. And then when it comes time to upgrade that application, you'll have another set of commands to run: stopping the service, backing up configuration data, and then doing the upgrade and restarting. And if you're doing those things often enough, you're probably going to write a script for them. Now, what if there were a way for everyone to come together and collaborate on those same scripts to operate an application? The scripts would be configurable and extensible. And what if there was a framework for those scripts to ensure that they were consistent and testable?
Well, that's what a charm is. A charm is just a package of these operation scripts and tests; in other words, charms are packaged operations code for an application. And this brings us to Charmhub, which is the official repository for charms. It's similar to Docker Hub, but for charms. The source code for charms is hosted somewhere else, like GitHub or GitLab, but the charms themselves are posted on Charmhub, and Juju can automatically download charms from Charmhub when you're going to deploy.

And finally we have Charmcraft. Charmcraft is a tool to create, build, and publish your charms. Charmcraft can create an initial template for your charm; it will build it in a clean and reproducible environment; it provides ways for you to test your charm; and it allows you to publish to Charmhub. It has the same user experience and design language as Snapcraft and Rockcraft, so if you're already familiar with one of those other tools, you can quickly learn how to use Charmcraft.

Let's look at what's inside a charm. Starting from scratch, charmcraft init just creates a simple charm from a template. First we have the charmcraft.yaml file, and this is where you define metadata like the title, summary, and description, but it's also where you define the build process. Charmcraft organizes the build process into a series of parts. For example, you could have a part that clones a git repository and builds a Go application from that, and another part that builds a Python application from code in the source directory here, and then you could package both of those together in your charm. Now, even though you can do that, you probably won't. Charms are definitely the simplest artifact of all the craft applications the Starcraft team manages, so even though Charmcraft provides this flexible and customizable way, most charms don't use it; they'll just have a single part that uses the Python code in the source directory here. Next are the license and README files and the requirements, which are self-explanatory enough, so I'll skip those. Then we have the source directory, where the operations code lives, and Alex is going to tell us more about that in a moment. Finally, we have the tests. When you first initialize a charm these are just stubbed-out tests, but this emphasizes that charms are built with good coding practices from the start. And coming soon, Charmcraft will have a way to do more comprehensive end-to-end tests as well.

Yeah, so, this is a screenshot of my favorite text editor with the OpenVPN charm that I'll be demoing later. It's just a pretty standard Python class. The only real magic is here in the __init__, where you use the operator framework that Canonical provides to tell it to observe certain events. Juju is event-based; in this case, the events we're observing are the install event, the config-changed event, the start event, and the stop event, which I feel are all self-explanatory, and kind of the minimum set of what you want to be able to do with a service. Going into the handlers themselves, some might notice that I'm literally using subprocess.run to do apt-get update. Again, this charm is basically a script that runs as root on your machine. OpenVPN is already in the Ubuntu repos for me, so I don't need to do anything fancy to get OpenVPN; why not just build on top of what's already there? Everything else is just managing that systemd service and handling things when a changed configuration comes in from some client, well, some administrator's client machine somewhere in the universe, to this charm. Everything else is just, you know, systemctl start, systemctl stop; I'm sure everyone's familiar with everything that's happening here. So we've got that charm written now.
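The observe pattern just described can be sketched in a few lines. This is a stdlib-only miniature, not the real API (that lives in Canonical's `ops` package and differs in detail): events are plain strings, and the commands are recorded rather than run.

```python
# A stdlib-only miniature of the observe pattern described above. The
# real API lives in Canonical's `ops` package and differs in detail;
# here events are plain strings and commands are recorded, not run.
from collections import defaultdict

class MiniOpenVPNCharm:
    def __init__(self):
        self._observers = defaultdict(list)
        self.log = []  # argv lists we *would* run as root
        # wire events to handlers, as the real charm does in __init__
        self.observe("install", self._on_install)
        self.observe("stop", self._on_stop)

    def observe(self, event, handler):
        self._observers[event].append(handler)

    def emit(self, event):
        for handler in self._observers[event]:
            handler()

    def _run(self, *argv):
        # a real machine charm would call subprocess.run(argv, check=True)
        self.log.append(list(argv))

    def _on_install(self):
        self._run("apt-get", "update")
        self._run("apt-get", "install", "-y", "openvpn")

    def _on_stop(self):
        self._run("systemctl", "stop", "openvpn")

charm = MiniOpenVPNCharm()
charm.emit("install")
print(charm.log)  # → [['apt-get', 'update'], ['apt-get', 'install', '-y', 'openvpn']]
```

The shape matches the talk's point: a machine charm is essentially a root-privileged script whose pieces fire when Juju emits events.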
I'm going to package and deploy it, same structure on the screen that I showed you before. Yeah, come on, network. I think funny cat videos deserve priority, so I totally get that. Let's try just going back. I thought connecting to the ethernet was a great idea, but, you know what, I was on the ethernet; I'm just going to switch to wifi. There we go.

Okay, so I'm literally running charmcraft pack in the directory. This sets up a container that has just a basic build environment, so the first time you do it, it'll take a little bit of time, but that container does get reused. It took about a minute to do an initial build there for me. Then I add a model in Juju, with whatever I want to call the model, and deploy into it. I deployed qBittorrent with a bit of configuration; this varies from charm to charm, so you'll have to look at the charm docs. But that fired up an LXD container for me, installed qBittorrent, and set up the web interface and everything. I got to set my own username and password there, and look, I'm downloading Ubuntu.

Oh, so in that case I was downloading the Ubuntu torrent, sorry. We support Ubuntu LTS, CentOS 7, and AlmaLinux 9 currently as the machines that the charms can run on, but it's per backend how that gets downloaded. So LXD, for example, will download the Ubuntu image once and then just use that to fire up new containers. No, but we will be happy to accept PRs. I am the maintainer for Charmcraft.
I'm definitely happy to accept PRs on that front. So, another example. Let's say, now that you've got your BitTorrent client going, you want to host some home videos for yourself and your family, you know, some funny videos of your cats. We've got Jellyfin here, already available. So I made a model; each model is, well, things within a model interact, and things between models have additional layers to that interaction. So I made a model, I deployed Jellyfin from Charmhub, it was three commands, and, in this case, the charm actually uses the official Jellyfin apt repository: it adds that into the machine and then it installs Jellyfin. In the end I've got Jellyfin there. It defaults to using /srv, and we've also got some other charms online that let you mount a network share to /srv, so that you can use your home NAS as the actual location for serving that.

Now, one more thing is that sometimes your ISP really hates Linux ISOs, and so they won't let you use BitTorrent, because as we all know, Linux ISOs are the only known use case for BitTorrent. So then you might need to set up OpenVPN and put a VPN client on it. Can you rewind a little bit? To about 15 seconds in, and pause there.
Yeah. So in this case, I am installing the OpenVPN charm, and I'm telling it to deploy to machine zero, which is the same container that already contains my qBittorrent charm. When it's doing that, it will fire up OpenVPN just for that container. It won't affect any other machines in Juju, or indeed the machine that's hosting it. So now I can once again download Ubuntu, but I'm just checking that I'm using the tunnel interface. This will make sure that qBittorrent uses that tunnel interface, and that tunnel interface is being set up by my configured OpenVPN charm. And so when I download Ubuntu again... I'm very thankful to whoever was seeding that Ubuntu arm64 torrent that particular day, because I downloaded it about ten times. I'm taking just a little moment to get a few peers there, not revealing anyone's IP address, but this presentation and the videos will be available, so you can check more thoroughly later on. All of this is going through my VPN endpoint; none of that traffic is going out directly. Just to prove that it's actually using the VPN; that's kind of important.

All right. In this next demo we're going to be showing Home Assistant. The deployment is similar, but what we're going to show at this point is one of the limitations we ran into. With most backends, with Juju, if you want to expose your application to the network, you configure the ports when you're building the charm, and then you type juju expose when you're deploying it, and that exposes those ports to the network. Unfortunately, that feature is not yet implemented for the LXD backend.
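One way to do it by hand, matching the backend-specific workaround the demo turns to next, is an LXD proxy device that forwards a host port into the Juju machine's container. A sketch under stated assumptions: the container name is whatever `lxc list` shows for that machine, and 8123 is Home Assistant's default web port; both are my own illustrations, not from the talk.

```python
# A sketch of the manual workaround on the LXD backend: forward a host
# port into the Juju machine's container with an LXD proxy device.
# The container name "juju-example-1" and port 8123 (Home Assistant's
# default web port) are assumptions for illustration.
container = "juju-example-1"
port = 8123
cmd = [
    "lxc", "config", "device", "add", container, "webui", "proxy",
    f"listen=tcp:0.0.0.0:{port}",
    f"connect=tcp:127.0.0.1:{port}",
]
print(" ".join(cmd))
```

On other backends, `juju expose` would do this for you.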
So we kind of have to do this manually. And so here I deploy it, I immediately list my containers, and I see the first one, which is where Juju itself is running as a controller. A moment later you can see it spawns a new container, which is where it's going to deploy Home Assistant. All I'm doing is running a backend-specific command here, so I'm exposing a port for this container, and then I'll just do juju status and wait for it to finish installing. In this particular case, Callahan used the Home Assistant snap. Yes, and I'll talk about that in a minute too. But yeah, and then Home Assistant is right there, so I'll just breeze through the setup and the login, and there you have it: Home Assistant as a charm.

And like Alex just mentioned, Home Assistant is available as a snap. One of the interesting things is that if something is already snapped, it's really easy to charm, because all the work and the details are already taken care of in the snap itself. So if you're interested in charming something, I recommend you check whether there's a snap for it already, because you could have it deployed in truly minutes. Yeah, the easiest way to make a charm is to charm a snap; the second easiest way is to charm something that's already in the repos.

Speaking of which, we're starting a little organization on GitHub. It's called Charming Cottage. If anyone's familiar with Snapcrafters, I'm sure most people here are to some level, we're trying to do the same sort of thing, but only for charms for your home, for home stuff. If you want to charm something that runs on big-iron servers and whatever, that's wonderful, I highly encourage it, but Charming Cottage isn't the right place for it. If you want to charm, I don't know, Pi-hole, and run a charmed version of Pi-hole for yourself: one, good luck.
We both failed to properly charm Pi-hole; it does a lot of really confusing stuff. But two: yes, this is the right place. We will be happy to host that charm and to provide help for you. Anything that you want to do for home use, Charming Cottage is there. We've got discussion forums available, and if you want to play around with Juju on your home network, this is probably the easiest way to do it. All the charms we demoed, and more, are already in this organization. So, with that said, thank you. Any questions? Yes, go ahead.

Thank you. I just had a question about how someone coming from the enterprise, where, unfortunately, we don't use a lot of fancy tools, how would somebody integrate something like this? I saw that you guys talked about Azure.

So, Juju is basically a level above your cloud, if you will. Let's say you've got a bunch of stuff, both in Azure Kubernetes and in Azure virtual machines. You can bootstrap a Juju instance in your Azure environment, and you can start migrating VMs and Kubernetes pods into Juju, which is a matter of making sure they're properly defined for Juju, so that Juju knows what's going on with them, and then telling Juju to manage them by deploying the charm. Honestly, the easiest way to do that is to get charms of what you're running, deploy a new machine with the charm version of it, and then just migrate over. And that can all happen inside of that same cloud.

Yeah, and to build on that too: I think one of the goals with this is that it is hard to bring something new to the workplace. So many of these open source tools are things that developers play with at home, and then they want to bring them to the workplace. And yeah, it can be very challenging.
Yeah, especially with the bigger tools like this. The point of Charming Cottage is to get you familiar with the Juju tools and such in, honestly, a low-stakes environment, because if you mess up your personal streaming server, well done, you can't watch that home video again tonight.

I noticed when you were doing the charmcraft init that it created a good amount of Python files. Is Python the only language that is supported by that?

So, I'm going to say yes and no. Python is the most widely used way of doing it, and the operator framework is written in Python, for Python objects and things like that. But, you know how an OCI image is just a fancy tarball with different layers? A charm is a fancy zip file, with each of the events being handled by any executable that can run on that system. We have charms that are still running within Canonical that are a set of bash scripts within a zip file, plus a couple of YAML files to give it the metadata. You can do that if you want to; you could write the actual event software for your charm in Go or Rust or whatever, and Charmcraft can pack all of that for you. It will be a little bit steeper of a learning curve. I would recommend that for one's first charms, at least, you use the operator framework and what's provided. But once you're familiar, please do weird stuff with Charmcraft.
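The "fancy zip file" point above can be sketched: Juju runs the charm's single dispatch executable and identifies the event through the environment (the JUJU_DISPATCH_PATH variable, e.g. "hooks/install"), which is why the handler can be written in any language. The handler actions below are placeholders, not a real charm's behavior.

```python
# Sketch of why charm events can be handled in any language: Juju runs
# the charm's `dispatch` executable and identifies the event via the
# environment (JUJU_DISPATCH_PATH, e.g. "hooks/install"). The handler
# actions here are placeholders, not a real charm's behavior.
HANDLERS = {
    "hooks/install": "install the software",
    "hooks/start": "start the service",
    "hooks/stop": "stop the service",
}

def dispatch(env):
    """Route an event name from the environment to a handler action."""
    return HANDLERS.get(env.get("JUJU_DISPATCH_PATH", ""), "unknown event")

print(dispatch({"JUJU_DISPATCH_PATH": "hooks/install"}))  # → install the software
```

A bash charm does exactly this with a case statement; the operator framework does it with observed Python handlers.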
Please break it in weird and wonderful ways, and please send me many very detailed bug reports.

Yeah, so I'll definitely agree with your comment there: you can do bash-based scripts. Basically, you have a dispatch script inside your charm, and then you can write that in any language you want. But I do find that one benefit of using the operator framework and Python is that it's very easy for other people who are familiar with charms to contribute, because you basically have a common, standard language for collaborating. So start hooks behave the same way, configure hooks, all that, and even custom events as well. One thing that's really interesting about charms is that they can share libraries, similar to, say, Python package dependencies. Let's say you want to integrate an identity server; you want to have, I don't know, Linux users for all the family members in your house. You could take a library from one of those LDAP charms, put it in your Home Assistant charm or something like that, and then they would be able to speak a common language that lets them configure each other.

Yes, but I also did actually have a question. I wanted to know, how long did it take you to put together all the videos and the charms for Home Assistant and whatnot?

The charms took us a little while, because we were experimenting with a lot of stuff. The videos that I did, I did all of that in a weekend. Yeah, we tried quite a bit; Alex spent some time using OpenStack, or trying to set up OpenStack, for this as well. And I struggled to get OpenStack running at home. But then again, this is also someone who has managed to install Gentoo but not Arch. Or SUSE.

So, I noticed that charmcraft init also creates some unit tests and some integration tests, just some boilerplate for those. Are those run when you're creating the charm? Are those run when you're actually deploying it? When are those run, and what are the use cases for them, really?

So, the full template that charmcraft init gives you includes a tox.ini file, preset to run your unit and integration tests. It also includes static linting with ruff and mypy. Is it mypy or Pyright? I think we use Pyright; I'm not sure. So you would use tox to run those. We have an upcoming feature that'll be a charmcraft test command, which will run not just those unit and integration tests, but developer-defined end-to-end tests, where you have the packed charm and you're running it inside of a Juju environment, doing certain poking at it, et cetera.

To extend on what Jason was talking about: I am currently trying to figure out how to detach Jellyfin from its database, so that I can use the charmed version of MySQL or Postgres as the database for the Jellyfin charm, and I'll be using the database integration libraries for that, so that I can run the database on a separate machine.

With the more detaching of such things that you do, aren't you running into the possibility of getting into situations that the original vendor has never in their life tested for?

I am. This is why we gave that caveat up front. None of the projects that we're doing this for, well, maybe one of them is aware of it; we certainly haven't told any of them, "Hey, we're trying to do this," and we're not trying to make this the official way to do it. I'm doing that for my own interest, and using that for Jellyfin.

I've heard that the Home Assistant people can be rather belligerent against people who are trying to repackage their software.

Yeah, we've seen that, and on things that are as big and complex as Home Assistant, and as Pi-hole is, we're trying to work with them rather than trying to repackage what they do every time; to work more on providing a charm that will use their official packaging if at all possible. So I've got a work in progress for Pi-hole that actually uses their install script inside of the charm. Their install script is very much designed to be user-interactive, and charms are very much not, so I'm doing some weird, hacky work to get around that. But the preference is definitely to use as much of the official stuff as possible, and charms are a way to wrap that into something that is fairly standardized.

Any other questions? All right. Well, thank you very much, everyone. Thanks, folks.

Okay, so we have a bit of a break here, and at three, if you want to learn how to use supercomputing to do science, you will be in the right place. We hope to see you back here at three, so get some water, stretch your legs; I'll see you all soon. Since there's a bit of an extended break, if you want swag, we do have some more stickers and I think a couple of mugs and journals. Now's the time.

Hello, can everyone hear me? Okay, sweet. So our next talk is Accelerate Your Time to Science with Ubuntu and Open OnDemand, and this will be given by none other than our own Jason Nucciarone.

All right, thank you, Nathan. So why don't we get started here.
First of all, thank you all for being with me this afternoon. So yes, the title of my talk is Accelerate Your Time to Science with Ubuntu and Open OnDemand. First things first, a little introduction. If you weren't here for the impromptu talk that I gave this morning, my name is Jason C. Nucciarone. I am an HPC engineer at Canonical, and I am one of the not-so-ancient elders of Ubuntu HPC. And so, if you're wondering what Ubuntu HPC is, what that acronym means: at least for the first two years I was like, oh, everyone knows this, but then I quickly got reminded that sometimes, when you don't have the proper context, acronyms mean a bunch of nothing. So, a quick little definition here. What is high-performance computing, and why exactly should you care about it? It is a paradigm where you utilize supercomputers, computing clusters, and grids to solve advanced scientific challenges. And so you might think, what industry specifically leverages this technology, who uses it? Well, it's pretty much everywhere. You probably come into contact with an HPC system, or something that was trained on an HPC system, every day. Some examples of industries where HPC is really popular: aerospace, where they can use it for modeling jets and whatnot. Agriculture, where they can use it to predict weather trends, to understand how to plant crops, and to gauge potential threats; you know, if there's a wildfire or a massive flood, how much could they potentially lose, and what is the impact it could have on the local food system?
And then it's also used in finance, for advanced prediction, fraud detection, and also modeling potential future financial trends based on various compounding factors. And if you see that QR code there, if you go ahead and scan it, that will take you to a page on the Ubuntu website that goes over some more use cases where high-performance computing is really popular.

So then the next question is probably: what does Ubuntu have to do with high-performance computing, and what exactly is Ubuntu HPC? Ubuntu HPC is one of Ubuntu's newer community teams. We've been around for almost a year now; our one-year birthday, I think, will be in May. If you read the text on the front there, our team page (I think it's on both Launchpad and the Ubuntu community website), it's basically a definition of what we work on, and our focus area is mostly: how do we make Ubuntu a better operating system for doing high-performance computing? To give a little bit of context about how this community team came into creation, it actually started at the 2022 Ubuntu Summit. Two organizations met: obviously, the first one was Canonical, and the second one was another organization that specializes in doing high-performance computing on Ubuntu, called Omnivector. At that point Canonical, at least me, we were working on doing HPC: we were doing packaging, writing some Juju charms, from the previous talk, and, once again, packaging; just lots of packaging. And then we got talking, and we realized that we had a lot of common challenges. At that point, what you see on stage left here is Ubuntu HPC.
That's our community logo right there. We decided to start a community team that was open for anybody to join and contribute.

So you might be asking yourself, thinking back to the title of the talk: what does Open OnDemand have to do with Ubuntu or HPC? When you think of a supercomputer, look at the picture here. What do you see? A lot of computers. Supercomputers are massive machines: forget your laptop, think of an entire building dedicated to running a single computer. It draws many megawatts of power; you actually have to build it in very specific places, because not every power grid can support running an HPC system. You have thousands of compute cores, and thousands of people all using resources on that system.

And then, some not-so-humble artwork here (this was made by a friend, by the way, I didn't make it myself). You'd think: I have access to such a powerful system that I'm basically going to be unstoppable. I am going to create things; science and math, I am the master of them, and nobody can stop me. But a frequent problem I found, at least in my early years in HPC when I was a consultant, is that most people start out feeling like this, and what actually ends up happening is this. So why do most HPC users struggle? And don't get me wrong, they're very bright, smart people: PhDs, leaders in their field, masters of biology, masters of genomic analysis. And yet we all have this Patrick-accidentally-nailing-a-board-to-his-head moment. I'd like to make an assertion about why they struggle to use supercomputers: it's because of this. Or, wait a minute; the slides got a little off there. Okay. So: HPC is everywhere, and yet a majority of individuals struggle to get started. What are some possible reasons why new users struggle with HPC systems? I alluded to this a little earlier, but what I want to mention specifically are some maybe naive lines of thinking, or at least easy things to point at. The first could be: do people just struggle because it's a different tech stack? Most modern computer scientists are very familiar with, say, the Python programming language, but in HPC, languages like C and Fortran are much more prevalent and much more used. You could also argue there's a lack of adequate new-user training, though I often find that most HPC institutions actually have excellent documentation on how to do common use cases. You could argue that maybe they just don't understand the problem they're trying to solve, or don't understand how to correctly model it on the computer. You could say the system is too advanced for these noobs. You could say they're lifelong Windows or Mac users who just aren't familiar with the Linux environment. Or that they're just not very smart. But what I gave a little sneak peek to on the previous slide is that I'd like to assert the real problem is this: the terminal. And you might look at that and say, why does this guy up on the stage think that the terminal is the problem?

Well, maybe a rough show of hands; you don't have to raise your hand if you don't want to. How many people would get the idea that they have access to a supermassive computer, capable of solving the most advanced problem you can think of, if all you see in front of you is a black box telling you to type something? Who thinks they could figure that out? Okay. All right, who's been using Linux for like ten years, show of hands. Oh, all the same people. Yeah. So: you don't even know what computer you're talking to from that prompt, and that's kind of the problem. It's very hard for a new HPC user who doesn't have ten years of Linux experience like most of this room. They're just trying to get something done, because their principal investigator or their boss asked, or maybe they just want to test something out or try something new. They get access to a supercomputer, they're told to type these things and click a couple of buttons in, say, PuTTY or another SSH program, and they just get given this black box. So that's the problem. You can look at it and say: in the beginning, there was the shell. And so now, here's the case against it. Why might the shell, or the terminal, not be the best entry point for new HPC users?
Well, first you can think of affordances. If you're not familiar with affordances: a device, say a phone, a laptop, or even your car, has affordances that tell you what it's capable of doing. Think of a steering wheel: you can steer the car. The gas pedal: you can accelerate. With the terminal, the affordance tells you that you can type something, but it doesn't tell you what exactly to type, how to type it efficiently, or how to do anything correctly.

The second thing is that the terminal can be a learnability challenge. Why? Well, if you don't even really know what the terminal is (say you're six years old and just opened the terminal on your Mac laptop for the first time), the shell and its complementary scripting language don't teach you anything right off the bat. If you want to learn something new, you have to know the right places to go: you have to know how to open manual pages, how to search the internet, how to go on Stack Overflow and tell who's blowing smoke and who actually knows what they're talking about. How do you learn what you don't know if you don't even know how to learn what you don't know?

Next, you could also argue against the terminal on consistency. The issue isn't necessarily the user interface of the terminal itself, but if you look at a lot of command-line applications, you find that depending on the framework they were written in, they follow very different style guides. The low-hanging fruit I can pick on about usability is that not every CLI application has the same interface for just getting help information. Some commands only support the short option, -h; some only support the long option, --help; and some actually have a built-in help command that offers extra information for subcommands. It can be very frustrating when you're first starting out and you think every application takes this --help option, and then some applications don't even offer a help dialog at all.

And the last thing is visibility. What do I mean by visibility in the terminal? Basically, it doesn't really show you what to do next. Let's say you go online and figure something out, like how to read a file. What do you do after that? How do you copy that, say, into a clipboard? How do you pipe it out somewhere, or process it further with something like grep? How do you move data from one directory to the next? It doesn't tell you what to do. So now that I've railed against the terminal a bit, you're likely asking yourself: what exactly is Open OnDemand, and what does it do?
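As a quick aside, the help-option inconsistency above is easy to see with real commands. A small sketch, assuming GNU coreutils on Linux:

```shell
# The GNU long option works: this prints a "Usage: ls ..." line.
ls --help | head -n 1

# But the short form is not universally "help": for ls, -h means
# "human-readable sizes", so this just lists files instead of helping you.
ls -h /tmp

# And other tools use a dedicated subcommand style instead,
# e.g. "git help status" (left as a comment, since git may not be installed).
```

Three tools, three conventions, and one of the "obvious" guesses quietly does something else entirely.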
Open OnDemand is an interactive portal for accessing HPC resources over the internet, through your web browser. If you scan the QR code there, it'll take you to their main website. Essentially, Open OnDemand is a web-based application that, as I like to say, moves beyond the terminal: it focuses on providing a visual interface for taking advantage of your HPC resources. Okay, I saw some people scan it.

What I have here are some example screens of what Open OnDemand looks like. The one I really like to show is this: it's an interactive interface for figuring out what's going on in your cluster. What this screen is showing is a list of jobs. In HPC systems, not everything is synchronous: typically you submit work to a scheduler, and the scheduler is then responsible for deciding when to run your workload based on available resources and other constraints. This screen effectively shows you where those jobs are. If you did the same thing in the terminal, it would basically just say your job was submitted, and tell you nothing else. It doesn't tell you if it's running, it doesn't tell you if it's queued up, it doesn't even tell you if the submission was actually good enough; it could get totally rejected, and then you're just left wondering what happened.

There are quite a few benefits to using an application like Open OnDemand for accessing your HPC resources. The first thing I really like about it personally is its support for interactive applications; it's not just Open OnDemand itself that you run on your system. For example, you can get a graphical interface to a lot of really popular data science applications. Some examples I have listed here: Jupyter, so you can use Open OnDemand to queue up Jupyter sessions on your supercomputer. You can also use it to queue up a VS Code server, so if you want to develop code directly on the system (someone here is really flexing their H100 GPU collection on me; I saw him laugh, that's how I know it landed), you can launch your VS Code server directly on that node and take advantage of those resources right there. There are other popular applications as well: Pluto.jl, which is similar to Jupyter but dedicated specifically to the Julia programming language; Shiny, which provides web apps through the R programming language and is really popular in data science; RStudio, which is an IDE for R; and MATLAB, which is a popular application among scientists as well.

Moving to the next category of benefits, you also get interactive desktops. One thing that's really nice about Open OnDemand is that it gives you the ability to get graphical sessions, through VNC, directly on your compute nodes. If you're not really a terminal person, if you're not interested in typing in a bunch of text and becoming a Bash scripting king, you can instead launch a desktop session directly on your cluster. The examples I have here: default Ubuntu, so you can run stock GNOME; and it also supports two other desktop environments, Xfce, which I've also commonly heard referred to as "x-force" (I don't know if anyone else calls it that, but that's what I've heard sometimes), and MATE. So, does anyone actually call it "x-force"? Nobody? Nobody calls it that? Have I just been wrong this whole time? All right, okay.

And the last benefit I really like to mention is scheduler integration. You could argue that schedulers are more HPC-specific knowledge, but everybody has their preferred scheduler they like to use. Of the logos I have up here, the first is LSF, a workload scheduler produced by IBM. Then there's Slurm, which is named after the drink from episode ten of season one of Futurama; I'm able to rattle that off because I get asked a lot, "what is Slurm?" It started as the Simple Linux Utility for Resource Management, but they prefer that you don't call it that anymore; they want you to call it the Slurm Workload Manager now. There's OpenPBS, the open Portable Batch System, which is similar to Slurm but has some syntactic differences. And lastly, it also integrates with Kubernetes, so if you want to run your jobs on your Kubernetes cluster, you basically just configure your scheduler there and it'll launch a container for you. That's pretty neat.

So now that I've gone through the benefits of Open OnDemand a little bit (oh, jeez, I just noticed that typo on the slide, sorry), how is Open OnDemand important to the Ubuntu HPC community? To give a little historical context, if you look at this diagram, which has been through a couple of iterations: we as a community, Ubuntu HPC, are currently working on developing this charmed application called Charmed HPC.
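To make the Slurm submit-and-wait flow from earlier a bit more concrete: a Slurm job is typically just a shell script whose #SBATCH comment lines carry the resource request. A minimal sketch (the resource values here are arbitrary, for illustration only):

```shell
# A minimal Slurm job script (hypothetical resource values):
cat > hello_job.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=hello      # name shown in the queue
#SBATCH --ntasks=1            # a single task is enough here
#SBATCH --time=00:05:00       # wall-clock limit the scheduler plans around
echo "hello from $(hostname)"
EOF

# On a real cluster you would hand it off asynchronously:
#   sbatch hello_job.sh     # returns immediately with a job ID
#   squeue --me             # poll the queue to see what happened
# Since #SBATCH lines are ordinary comments, the script also runs locally:
bash hello_job.sh
```

That sbatch call returning immediately is exactly the "your job was submitted, now what?" gap that a visual job panel like Open OnDemand's fills in.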
So, Charmed HPC: charmed high-performance computing. What that is, is that we're looking to leverage the work of Callahan and Alex here, using Juju and charms and all that, to deploy a fully functional high-performance computing system practically wherever you want it. It could be on Azure, it could be on AWS, or even on a local LXD instance on your laptop.

This is the current architecture of Charmed HPC. As you can see, on stage right, the traditional way of accessing it is via the SSH protocol. You go into a login slash head node, and that's where you get that terminal experience: you're now on a host, and you have the challenge of knowing, okay, what computer am I running on? How do I leverage these resources, and how do I even know I'm leveraging them in the best way possible? From that login/head node, you're then interacting with the job scheduler. In this case, we have chosen Slurm as our job scheduler implementation. From the login/head node you submit jobs up to Slurm, and Slurm will then take inventory of all the available compute nodes, which are on stage left here, and go run the jobs on them. Compute nodes can be composed of very different resources; they can be homogeneous or heterogeneous depending on the cluster's needs and architecture. For example, here we have the Spack package manager, a user-level application that can bring in all sorts of different packages; quite a great community, I love working with them. Nodes can also have some optimizations for networking, so you can choose high-speed Ethernet, or something like InfiniBand.

Now that we've looked at what Slurm specifically is, we also have some other auxiliary applications. For users, we use LDAP (I forget what the acronym expands to off the top of my head, now that I'm on the stage); we use it for providing identity and access management, which is how we make sure users are the same person across every node on the cluster. Then we'll typically also have some kind of parallel storage implementation; that's how we make sure files are available on all of the compute nodes. Say somebody goes into the login/head node and submits a job that lands on compute node A, but they have another job to run that needs to use something the job on A is using: if it lands on compute node B, that data will still be accessible. You might also see MySQL up there at the top. MySQL is mostly for collecting cluster and usage data, so that if you're a site admin, you can execute queries against it to see your cluster's overall usage, or find out if you're not charging your users enough. And lastly, we have two more components. There's COS, an acronym for the Canonical Observability Stack, which is a suite of tools for collecting and viewing metrics about your cluster. For example, we could collect health information about all of the compute nodes, pipe that into COS, and then go to, say, a Grafana dashboard that will tell you everything you need to know about that compute node. And the last thing is that we have MAAS, which is Metal as a Service; that's the backing cloud implementation we use. We're able to slot pretty much anything in there, but in this case MAAS is what allows us to request bare-metal resources.

And so now, with the sunglasses-and-finger-guns emoji: Charmed HPC with Open OnDemand. How this implementation is different is that instead of having only that traditional SSH login/head node with the shell environment, you now have another entry point, through the web, using Open OnDemand. You just have the Firefox logo there: you go into Open OnDemand, and you're still able to fully leverage your system's resources. It integrates with Slurm, you still have the same users, and you also have access to the storage the cluster is using. Yeah, it's pretty nice.

The next question, now that I've done a bit of talking about Open OnDemand, is: where does the work currently stand on integrating Open OnDemand into Charmed HPC? This is something we've been working on as a community. There are quite a lot of steps; Open OnDemand has a pretty complex architecture, but we've been making steady progress on it. The short of it is that we're hard at work; I was still working on it yesterday while traveling here. There are a couple of things we have done, some things in flight, and other things we're still hammering out.

The first thing is that we have a functional snap recipe: we're able to get a successfully building snap package that has all of the services Open OnDemand requires. Later in this presentation I'll show it, but it requires three different web services; it's a bit interesting how that works, and we had to figure out how to cram all three of those inside a snap. We also worked on developing some abstractions for configuration. Open OnDemand has a lot of YAML configuration files; I think you can have up to around 100, depending on how granular you want to get in the conf.d directory. We developed abstractions to make that really easy to manage using snapctl, so if somebody wants to say, "I want jobs to have this lifetime," they can configure that directly through snap. And we also have a working Apache HTTP integration for serving the web application itself; Open OnDemand uses Apache as the primary entry point, and with that working you're able to start getting to some of the main interfaces.

Then we have the in-flight parts, and some of these are a bit complicated. The first is that we have to add an OpenID Connect integration, basically for the 2FA/login service. We have LDAP working, but we need to be able to authenticate our users as they access Open OnDemand. You'll basically be able to use any OpenID Connect provider; we'll be starting with Dex, and then probably looking at something like ORY Hydra. The other part we're working on now is the NGINX integration for running user-confined web applications. For example, when Open OnDemand schedules and runs Jupyter, it's using NGINX to proxy that back to the user, confined as that user on the host, so that they can't just say, "oh, I'm root now, and I'm going to delete all your data." Then there are two things slotted for the future. We want initial support for a base set of interactive applications; there are a couple of defaults that upstream recommends, specifically Jupyter and the VS Code server, plus some other default apps available as reference points. And we want to develop a Juju operator, which is basically a charm, so that all you'll have to do is "juju deploy" Open OnDemand and you'll have a fully working implementation. And so now: why is this work still in progress?
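As a rough sketch of the snapctl-backed configuration abstraction mentioned above (the dotted option name and the single-key translation below are illustrative assumptions, not the real interface): the idea is that a site admin runs something like "sudo snap set on-demand portal.servername=ood.example.com", and a configure hook translates that dotted key into a line of Open OnDemand's YAML configuration:

```shell
# Stand-in for the value a configure hook would read via `snapctl get`
# (the option name "portal.servername" is hypothetical):
opt="portal.servername=ood.example.com"

key="${opt%%=*}"   # portal.servername
val="${opt#*=}"    # ood.example.com

# Write the corresponding YAML entry; a real hook manages many keys at once.
printf '%s: %s\n' "${key#portal.}" "$val" > ood_portal.yml
cat ood_portal.yml    # -> servername: ood.example.com
```

The point of wrapping this in snapctl is that admins get one uniform "snap set" interface instead of hand-editing a hundred YAML files under conf.d.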
Well, you know, I just want to give a little bit of an overview of the architecture of open on demand So on the kind of Stage left side here, um, you'll basically be looking at the auxiliary services So these are things that are kind of outside the domain of open on demand But things that open on demand integrates with And then on stage right, um, those are components that go into open on demand So kind of going from top to bottom here If you look at the first part, um, you see the front end proxy And what the front end proxy is that basically the Apache service, um, that serves up the Ruby on Rails application that, you know, you kind of work through a user would work through to Request jobs, you know, log in, um, they can even get an interactive terminal if they wanted it And also just like view status of services And then kind of then beneath that, you have the per user aspect of open on demand So in this case, what you have then is like that back end proxy That's where the engine X service that we're currently working on comes into And then you have the application runner, which is a fusion passenger That's like a Ruby application for starting it And then the application itself, kind of all the way at the bottom And so per user, what that is, they commonly refer to it the open on demand upstream As per user engine X process Is it is responsible for basically queuing up user jobs, you know, starting Jupiter But then running that kind of under that user's namespace And then that way, you know, they can't accidentally access or, you know, maliciously Access someone else's data and that's the way, you know, everybody is who they're supposed to be On each of the compute hosts And then kind of then on the stage left side, you'll see the client The authentication, so site specific, basically that means it's like whatever Identity and access management platform you want to use So you could be like LDAP or you could be like Active Directory The HPC scheduler, as you saw 
kind of in an earlier slide that would be Slurm And then like the nodes themselves or the network file system For providing the parallel storage implementation And so now you might be wondering, okay, so like where is our work coming in here So kind of on the stage right side here This is the, those are the components that are going inside the snap So basically the front end proxy, the back end proxy, the application runner And the application, well not necessarily the application itself But parts of the application have to go into the snap So the first part that we have that's really working nicely Is the Apache front end service Or the front end proxy that's working quite nicely So currently now if you install the snap You can kind of be able to start navigating to some pages And then what we're working on is kind of that per user aspect Which is like the back end proxy and the application runner But we do have engine X and both passenger buildings successfully into that snap And then basically now the parts that are highlighted by kind of for Juju on stage left Those are currently, we have those available So we have a scheduler implementation available We also have authentication available We have the node hardware itself And then we also have the storage implementation So really it's about getting that kind of front end part working inside the snap And so there's been quite a few lessons learned during this process I've at least been working on it since, what was it? 
Since we released Mantic Minotaur So back in November So kind of the first title here You know, I guess a little bit playful maybe But you know, I ain't met no bug, I can't squash And so what we've learned along the way You know, maybe a bit bold there But so, you know, kind of the first major lesson that we learned Is that we needed to bundle a custom Apache with open IDC inside the snap So if you've ever used like the next cloud snap You'll kind of notice that they ship their own Apache instance And so we have to do the same thing And so the main challenge with that is Is that Apache from Archive does not stage correctly Using Snapcraft Which I don't think is actually Snapcraft's fault So don't worry about it But yes, so it doesn't stage correctly So there's some like, you know, at the alternative scripts That don't run right So we have to build it ourselves And then we also have to add the open IDC plugin as well So that open on demand is happy when users log in Another thing that we also found Is that we needed to build a custom passenger in Nginx And so the main reason for that Is that I believe the Nginx that we have currently in Archive Only has the Apache integrations And the other thing is as well is that Nginx itself Doesn't support dynamic modules So you have to compile them in statically So that was something that we had to do So basically in the snap recipe We have to basically pull down passenger And then we have to go inside passenger And then we have to pull down Nginx And basically then we compile Nginx And point to some paths that are provided by Passenger And then the other thing that we found too Is that Debian rules files can be your best friend For understanding how to properly stage and prime things With a snap So I find that usually if I'm ever snapping something That's also available in Archive I just basically unpack the Debian package And then just look at that rules file And then from there I kind of have a good idea of like Okay, what 
needs to be in a configuration script What needs to be in an install script And where do files need to be copied inside of the snap image And then also another thing is that Source code can be the best source of documentation I know a common problem is people like Oh, the documentation says one thing But the code actually does another thing And it's like what? Documentation is wrong Yeah, so in that case what I really found Is that I am very thankful that Ruby is so easy to read Oh my god So we had to take advantage To make open on-demand work inside the snap We had to use some undocumented environment variables So that might be interesting later down the line If we go to the upstream maintainers and say Hey, we're using these features And it's like, oh, you weren't supposed to do that And it's like, well too bad So it's all good, it's all good But we found that source code Can be the best source of documentation Really because you can quickly identify If like, oh, we need to author something ourselves Or if maybe there's potentially an opportunity For us to make meaningful contributions upstream Or even at that understanding like Oh, the full feature set of the application So we don't need to modify it in our own way And then the last thing is that Client libraries can really help reinventing the wheel And make it easier to manage various components So if you actually do scan the QR code here This is a small library that I offered That basically just kind of wraps Like a couple of core YAML file Configuration files that Open on-demand has And basically makes it really easy to Kind of be able to manipulate that information And so how it's being used inside the SNAP Is that basically if I want to say like Oh, I want to change like the server name For this instance or I want to do something else All I have to do is just basically do Snapset on-demand And then just the configuration options I want to pass And so now for a short demo Basically go really basic 
implementation Of what we have working now So I'm going to pop over here, exit out Can I escape? I don't want to do that There we go I'm going to pull this here And I'm going to make the text bigger in the terminal Because I know that my text is too small Sorry, I'm always like You pre-record a video and then you're like Oh no, I made the text too small, it's all grainy So here we go Everybody read that? Is that good? Okay, sweet Thank you So first you'll see here on my terminal It's just basically a simple biobu instance I hope I said that correctly But what it is is that I'm currently In the open-on-demand project So if I do like LS here You should see some files So, you know, typical Git project It has the license, it has the readme It has the helpers aspect Which is basically just the configuration And installation hooks for open-on-demand And then the snap directory that defines the build And then overlays is just some custom files That I basically copy in when I build the snap And so for the sake of not trusting conference internet I already have a pre-built copy of the snap I've been burned before where it's like Oh yeah, I hope you don't mind having a megabit a second Which ended up not being great So I already have a pre-built snap And so what I'm going to do here Is just do a little reverse search And push it into a example LXD snap that I have Now we're going to clear that out And then just I'm going to shell inside the snap real quick On-demand test There we go So now I'm in on-demand test If I do the simple ls I should see, oh yeah, it's available And so now what I'm going to do Is I am going to do a snap install So snap install And then should be able to tab autocomplete Nice And then one thing is that I'm going to do dangerous I'm not basically just signals that Oh, I want dangerous, not angerous Daggerous, not that either Okay, there we go So we have dangerous That just basically says that we're installing a local copy And not something from the 
store And then classic environment And so the reason we have classic confinement Is open on demand actually qualifies Due to the fact that it is an HBC workload Orchestrator So it manages workloads Not the machines themselves And then there's also some aspects as well Where it needs to be able to drop privileges To run as the confined user So basically when you start engine next stage Or you run another job It needs to be able to assign that process The user is effective like UID and GID And so that's why we have that So maybe Ken could prove me wrong But yeah, he did that So I guess I'm in the clear But we do the snap install So hopefully everything goes well Doesn't break You can see mount snap on demand I also pre-downloaded a lot of the core snaps That I needed as dependencies So the snap is built itself off of core 22 So it uses all the libraries from there And then also just like having snap de-available So now if I go ahead here I should be able to run a basic command That does open on demand Or on demand update portal So that's just basically the front entry point So you can see there It just basically quickly generates an Apache config If you actually want to look at that I should be able to change to the snap directory I'll clear that so that everyone can read it VAR, snap, on demand Common, here let's go And then uh-oh It's getting long here Uh-oo-oo-dee Ooh, not PPP, oo-oo-dee And then config To be able to look here And then you'll see Oh, what's the oo-oo-dee portal YAML file I should be able to last that Hopefully less is available There we go And it's just basically a YAML file That specifies a couple parameters here So for example You could say like Oh, where to like put the access log For when people log in You can also specify If you want to collect user analytics as well We all know how everyone feels about Not being able to opt in to telemetry And then we can have some user reg access And then just like a couple common like URIs And redirects as 
well. And then if we change up here real quick into the apache2 directory — I might have spelled Apache wrong — cd apache2, there we go, and then we go into conf.d. Oh, I think I... oh, I did not. Okay, and then we read the portal file here. This is the automatically generated Apache access file. In this case, since I have not set up OIDC, it enables the rewrite engine, and basically it will navigate the user to a page that says: hey, you need to set up authentication, so people can't just log into your cluster and indiscriminately do things.

So now, if I exit out here, Ctrl+L, and go back to home, I can actually go ahead and start Open OnDemand. I should be able to just do snap start ondemand, and it says started. And now, if I grab the main IP of this server, which is 10.6.29 — let me pull over a browser window real quick. I heard somebody laugh at me. There we go. And now I type in the IPv4 address, and there: welcome to Open OnDemand. It started successfully, and it's signaling to you what you need to set up. Basically, at this point, the part that we're still working on is getting that authentication configuration inside the snap, and then getting it started so that it takes you to a login page — so, for example, a username and password, and then even 2FA, so you could go on your phone and be like, yes, this is my login attempt.

One thing that I do want to point out here is that there is a bit of a spelling error there. I actually did make a pull request to fix that. Currently it is not released in a bug fix — they're still working on the next release — but once that's available I'll be able to rebase the snap on that new version, and the spelling error will be fixed, which is pretty nice. I'm a contributor. Okay, so now going back to the slideshow here. Let's go on after the short demo. So now you might be saying, after what I've showed you here: what happens after this? This is kind of a
rough timeline. This is the UbuCon at SCaLE logo that we did — the penguin waving inside the Circle of Friends. I quite liked it; credit goes there for that. And then I have a picture of a numbat, for the Noble Numbat release. So you can look at it and see, oh, what are we specifically working on?

The next step we have is to integrate nginx_stage, which is basically that utility for starting the per-user nginx process for the interactive applications; we need to get that working. The next part after that is getting the OpenID Connect integration going. You can actually use Open OnDemand unauthenticated, but it's very clear that that's unsupported and not an intended way to use Open OnDemand, so I would not encourage it. Then, after we get that core experience working, we're going to publish the snap to the Snap Store, and that way people can start pulling down the edge release. We basically just don't want to publish it until it's actually working, because the last thing we want is everyone to pull down the snap and be like, your snap sucks, it barely works — and it's like, oh yeah, it's not finished yet. And the last part is then creating a Juju operator, to make it easy for site admins and Charmed HPC users to effectively start Open OnDemand and easily bring it into their clusters with minimal configuration.

And now, if you actually want to see the source code — because, you know, it's open source — you can go ahead and scan that QR code there. I will apologize that the README isn't exactly my best work; we'll update it later. But yeah, if you want to start getting a preview look at the work that's going on in there, you'll be able to scan it and see the snapcraft.yaml file, some of the other Python scripts that I'm working on for the configuration and installation, and client libraries as well. And maybe the last thing here is a call to action. So, as this is a
project of Ubuntu HPC: if you are interested in potentially contributing, using it, or you're just interested in tracking its development progress, I encourage you to join Ubuntu HPC. If you scan the QR code, it will take you to our community page on the Ubuntu website, and it'll give some information on how to get started, as well as an FAQ — because the big question we get is: this is really interesting, but I'm not in any way involved with HPC, can I still join? And the answer is yes. So yeah, if you have any questions, feel free to get in touch as well. That's pretty much it for my presentation, and thank you for listening.

Yes, we have a microphone there. So, I'm not sure about this: is this intended to be a user interface to existing data analysis programs, or are you hoping to take over some of the data analysis yourself?

I'd say it's more meant to be user-facing. It's basically a web portal that allows users to effectively manage the resources they have available in their cluster. So, for example, somebody deploys the Charmed HPC core components, like the Slurm workload scheduler; if they were then to deploy Open OnDemand, they would have an interactive web interface that allows them to access those resources, rather than having to, say...

Because I would doubt you'd want to take on replacing those data analysis programs. Many graduate students died for the sake of creating those programs over the years. And so, if they have 20 years of understanding that this particular set of packages works well, they're not going to easily move to something else. As a user interface, you're probably okay.

Yeah, so for a user interface, it's definitely fine. And maybe one thing I didn't really touch on, that I could have a little bit more, is that you can actually create your own applications that integrate with this front end. So, for example, if they had, say, these codes that they've been working on for 20 years — like,
oh, they have a GUI program that they click through, and it allows them to access maybe some libraries they wrote for data analysis — they could write their own interactive application that plugs into Open OnDemand. And at that point, through the web interface, they can just go ahead and start it: clicking through a couple of buttons and launching a session.

Is it good for interacting with things like NumPy and SciPy? Because those are some of the main things.

Yeah, yeah, it can be really great for that. If you're looking for a development environment to use SciPy or NumPy, you could basically just install them inside a Python virtual environment, and then access that virtual environment through, say, the code server. You could also install it into your IPython kernel for Jupyter. But the nice thing about it, too, is that if you actually develop through Open OnDemand and then want to submit your jobs the traditional way — using the batch scheduler and all that — you can still do that, because those libraries will be installed on the system. Is that kind of okay? Yes.

I noticed that you had a use case where the on-demand user would essentially go straight to a node, bypassing the job scheduler, and I'm wondering what kind of use cases you're thinking about with that. I mean, is there any interaction with the job scheduler, to know that you're using these nodes? Is the idea that the node's not in the cluster? Or...
Yeah, so the one thing that's nice about potentially having direct access to the node... I guess, let me walk it back a bit. The reason the job scheduler exists is that typically, when you have a lot of these resources, users maybe don't effectively know how many resources they need. So if they go onto a node that's super powerful — say an H100, or a massive many-core system — they're going to be like, well, I'm going to use all those cores, even if I only need one. Because it's performance, right? And what the scheduler helps with is partitioning out those resources, constraining and confining them. That way, what we try to reduce is the issue of noisy neighbors, essentially: if you just give everybody direct access to all the same nodes, you're going to have the problem where they're all going to want to use all of those 80 cores, and then they're going to be like, why is my performance degraded? Why am I getting all these incorrect results, or having all these issues? And it could just be because the node is oversubscribed. So that's the problem.

The nice thing about Open OnDemand is that it integrates with the scheduler. So when you request an interactive session — like an interactive desktop — it'll actually make a request out to the scheduler, and then it will wait to start your session. It depends on the cluster utilization how long it takes for that session to be granted, but once it is, the scheduler will go ahead and execute a command, which is that nginx_stage. It'll start the nginx, and then it basically gives a URL to the user that they can click, and it'll take them directly to that session, running on that specific compute resource. Yeah. Any other questions?
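To make that flow concrete, here is a rough sketch of what an interactive-session request amounts to on a Slurm cluster. This is an illustration, not the actual Open OnDemand internals: the resource values, script contents, and the nginx_stage path and arguments are assumptions for the sake of the example.

```shell
# Illustrative sketch only: Open OnDemand submits a batch job on the
# user's behalf; once the scheduler grants the allocation, the job
# starts the per-user nginx on the allocated compute node, and the
# portal hands the user a URL pointing at that session.
sbatch <<'EOF'
#!/bin/bash
#SBATCH --job-name=ood-interactive-desktop
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --time=02:00:00
# Hypothetical invocation of the per-user nginx helper (nginx_stage);
# the real entry point and flags are managed by Open OnDemand itself.
/opt/ood/nginx_stage/sbin/nginx_stage pun -u "$USER"
EOF
```

The point of the sketch is that the interactive session is itself just a scheduled job, which is why it inherits the scheduler's queueing, resource limits, and noisy-neighbor protection.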
Why is it that the software needs to use both Apache and nginx?

Yeah, I get that question a lot — thank you. So, to be honest, it's not quite clear to me, myself, why specifically that's there, but I can at least give a justification for why I'm supporting it: it's mostly because that's how upstream has it. A lot of their tooling is built around using Apache, and I think when they look specifically at nginx, they see it more as a reverse proxy than as a web server. So the reason they have both, I guess, is that Apache is the main web server component, and then they have the nginx stage, with the custom plugins that they need built in. That way they're not cross-conflicting with, say, a normal nginx instance that's well-optimized for just being a web server, versus their custom, modified version that's specifically built for launching Node.js, Python, and Ruby apps. So that's why they have both. And they also have a lot of utilities that are built around supporting Apache. Any other questions?
Well then, maybe have a round of applause for Jason. Thanks a lot.

Thank you, thank you.

Well, thank you for that. We have a short break again, and then we come back at the hour for the next talk, which is a good one also — also in this area. Yes, high-performance OpenStack. That's right, okay. So we'll see you back at the top of the hour, in about 13 minutes.

Do you talk to people at places like Oak Ridge National Laboratory about things that they do and use this for? As, you know, getting a connection between yourself and the end users is, I would think, very important.

Yes, yes, we talk quite a bit with some institutions. As part of our HPC community, we have a lot of folks in there. We have someone from AMD who hangs out with us; he's responsible for all their GPU driver packaging for Debian and Ubuntu. And we have a lot of other folks too, like Omnivector — they have a lot of customer use cases they bring to us. And so we're actively seeking those sponsorships, and I also go to a few HPC conferences.

A lot of the high-performance computing work goes on systems that have... like, you'll have programs that were originally written 50 years ago. They're still in use because a zillion papers have been written with them, and they do the right thing, so people don't want to rewrite that all over again.

Yeah, yeah, that's definitely something we've talked about. And Open OnDemand is definitely not like, oh, you're going to rewrite everything in a common language. The idea is that you can access that 50-year-old program through the web, rather than having to use the terminal.

Yeah, because the system that I know the most about is what's called X-ray crystallography. It's, for example, what was used to do things like figure out the structure of COVID-19, things like that. And you would be astonished as
to how many steps there are in the process of doing that. You start out with biologists — graduate students spending three years figuring out how to crystallize one particular molecule — because you have to take the crystallized form of the molecule, put it in an X-ray beam, and produce a lot of patterns of X-ray spots. And that's the part where I am; I'm running the equipment that does that. Then you hand it over to biochemists to run it through, like, 15 different programs. I saw a flow chart once of all these various programs, and it's a huge amount of stuff to marshal. If you go into that community, you may get resistance from people who say: but we already have programs that know how to marshal all of it.

Yeah, yeah. And so the idea is that — I guess I would look at it as — we're helping provide the infrastructure for running those applications. It's not like we're trying to replace core components. The idea is that we're trying to make it easier for a lot of these multi-discipline teams, where you have the biologists, and then you have the software engineers who write the code and all that. We want to make it easy for them to just have a platform that they can all work on, with those applications that they have.

But there are places along the way where there are proprietary programs, built on closed-source stuff, where the people in question can be very cantankerous to interact with. It's sort of like, if you hit them the wrong way on the wrong day, they may refuse to want to do anything with you. Like, there's this one particular one that I won't mention, but at least a dozen other people have tried over the years to replace that code, and failed.

Yeah, yeah. I think that's always going to be a challenge, where you have that self-determined bit — some folks just don't want to
work with you, and you've got to take your wins and losses when you can. So, ideally, we try to have an upstream-first policy with Ubuntu HPC. We've gone back and forth and done some work with them to get their snaps going, and they recently adopted an upstream... we've also done some work around that; they're very interested in doing that Ubuntu support, because they've traditionally been CentOS 7. So that kind of helps. But yeah, we do have to talk to them; not everyone wants to deal with us. Some folks are like, well, everything was great with Scientific Linux and all that.

Well, you also have to deal with the fact that there are large chunks of that community who are now very wedded to Red Hat Enterprise Linux.

Yes, yes, they are — that's the other issue. So that's one market we're trying to penetrate. We're hoping that as folks see, oh yeah, look, this is a nice platform and whatnot — and also with containers as well...

Well, some of the partner vendors and stuff only want to handle maybe... you don't even need all the fingers on one hand to list the platforms they're willing to support.

Yeah, yeah. So that's something, too, that we're working on, of course: going to these proprietary software vendors and saying, okay, you only support, like, Red Hat — but what will it take to get you to also support Ubuntu? And a lot of the time it's showing demand from users who want to use those applications on Ubuntu, and then sometimes also just providing the technical support.

Some of these decisions are made by people in probably high locations in the hierarchy. Yeah. You know — people who... do you actually know how X-ray crystallography works? So I wish you well; you've chosen a tall mountain to climb.

Yes, I have. Thank you, though. Thanks for the questions, I appreciate it. Okay.
Yeah, well, he's getting set up for the talk. I have a... yeah, what is it? So, you don't use the debian/rules file? No, here's the thing. Well, I don't think it's directly... how it worked is, everything was in the rules file — the entire makefile for the entire package — that was the base. In recent years it's been made a lot simpler by debhelper. Right. So debhelper actually has a bunch of scripts that you override... I don't know if it's as large a part of it as it needs to be. Ah, it's fine, it's fine. Really, you're looking at debhelper, I think it holds. Yeah, yeah, I know. Okay, yep.

Okay, everyone, we're ready for our next talk, called OpenStack for High Performance Workloads, and our presenter is Felipe Perez.

Hello, everyone. So: OpenStack for high performance workloads. I'm Felipe, I'm a software engineer at Canonical. I'm the Charmed OpenStack lead at the moment, and I've been an OpenStack contributor for many years now; I've contributed not just to Charmed OpenStack but to upstream OpenStack projects like Nova and Magnum. The purpose of this presentation is to give you the idea that running software in a private cloud doesn't have to be slow. Typically, when you're running programs in a shared environment, you are going to face challenges of contention for resources, and typically that gives you a bad experience. But OpenStack provides a bunch of ways to mitigate those, and eventually — depending on whether you have enough hardware resources — you're going to have a pretty good experience. So we're going to go through four aspects:
what OpenStack is and why it matters; some of the components that compose an OpenStack cloud; what characteristics high performance workloads have and what metrics we care about — we're not going to go too deep on this, it's a presentation on its own, but we'll take a quick look at what metrics are relevant here; and finally, what configuration options OpenStack has to deal with all these aspects.

So, what is OpenStack? A set of components that provide common services for a cloud infrastructure. That sounds pretty big, and if you take a look at the high-level overview, this is what you're dealing with: a REST API, so you can get bare metal servers, virtual machines — or, in OpenStack terms, instances — and containers; you can configure it also to give you containers. And then you have shared networking and storage, and there are a bunch of techniques to mitigate contention on all of these aspects, so you don't have to compete for them. Now, if we open up this nice box that looks pretty simple, you will start getting into this — it can explode very easily, and you start finding a bunch of things, like the message queue that allows you to have long-running tasks and have things happening in the back end. Now, this is an incomplete picture; it's from the installation guide. It has some components that, in the case of Charmed OpenStack, we don't have, like Trove and Sahara, but it's also missing some others that are newer and very relevant for OpenStack clouds, like Octavia. Octavia is the one that provides load balancer as a service, and it's pretty critical when you're deploying Kubernetes on top of OpenStack.

Now, from this very large picture, we're going to pick just three components — the ones that are most relevant for what we're dealing with here. Nova is the one you're typically going to be interacting with; it's the one you're going to be requesting the creation of your instances from.
It's the one that interacts with other services on your behalf, because when you are requesting the creation of an instance, you're going to need networking so you can access it, but you're also going to need some storage — and Nova is capable of doing all of this on your behalf. It's important to mention that Nova itself is not a hypervisor; it's just a control plane in the back, and it hands off all the specific virtualization to the driver — in our case libvirt and KVM, which is the one we use by default and the one that has most of the features we're interested in.

Then you have Cinder. Cinder takes care of creating the volumes. In our case, we use Ceph by default, but you can use other stuff like LVM or NFS. There are a bunch of drivers, so you can use specific network appliances that you may already have in your data center and take advantage of those. In the case of Charmed OpenStack, we also have support for Pure Storage, NFS, and some others — you can take a look at the documentation — but Ceph is the good one.

Then you have Neutron, and Neutron is the one that provides you all the overlay networking: you can have your own IP space, you can have your own private network on top of your cloud, and it allows your services to talk to each other no matter which node they're running on. And again, Nova is the one that creates ports for you and attaches those ports to your VMs, so you don't have to talk to Neutron directly. But depending on how sophisticated the network topology you want to define is, you may want to create your own subnets and your own ports; if you want instances with multiple ports, you have to do all of that on your own. Nova is going to give you the typical VM — one NIC, one volume to store your root filesystem — and that's it. Also, with Neutron you can
model the gateways, and that way you can configure how your traffic is going to flow on your network.

So, that was the control plane, and now what we need is to understand what metrics we want to monitor, because when we start optimizing, we're going to care about specific things; not everyone cares about everything — it's all about pros and cons. When it comes to something that you want to perform well, depending on who you ask, they may be thinking about memory; someone may be thinking about IOPS, input/output operations per second; some people are thinking about throughput; some others have CPU-bound problems; and some others care about latency — in case you have a service where you are expecting to serve requests under a given threshold, latency is going to be crucial. All of these metrics are interrelated, so every time you push for CPU optimizations, you may be affecting some of the others. So you need to be measuring those metrics and understanding how they are being managed. Now, this is a very well-known diagram — this one is from Brendan Gregg. He's a well-known person who has given multiple presentations on performance and observability tools, and this diagram gives you an idea, depending on what you want to look into in the system, of what tools you may have available: whether it's an application, sockets, TCP connections, and so on. I really recommend visiting Brendan Gregg's website; there are many presentations, and he has a book on the topic.

Now, OpenStack configurations. One of the easiest to set up is host aggregates.
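Setting one up from the admin side looks roughly like this. The aggregate, host, and flavor names and sizes are illustrative assumptions, and the commands need admin credentials plus the relevant scheduler filters (AggregateInstanceExtraSpecsFilter and, for the project case, AggregateMultiTenancyIsolation) enabled:

```shell
# Create an aggregate for SSD-backed hypervisors and tag it
openstack aggregate create --property ssd=true fast-storage
openstack aggregate add host fast-storage compute-01
openstack aggregate add host fast-storage compute-02

# Create a flavor and restrict it to hosts where ssd=true
openstack flavor create --ram 8192 --disk 80 --vcpus 8 ssd.large
openstack flavor set ssd.large \
  --property aggregate_instance_extra_specs:ssd=true

# Project isolation works the same way: dedicate an aggregate to one
# project by filtering on its ID
openstack aggregate create research-nodes
openstack aggregate set --property filter_tenant_id=<project-id> research-nodes
```

The key idea is that the property on the aggregate and the extra spec on the flavor are just matching key/value pairs, so the same pattern works for GPUs, CPU architectures, or anything else you care to tag.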
You may have a fleet of nodes, and those nodes may have different properties. In this case, we're creating a specific aggregate for nodes that have SSDs. So we want to create a flavor — the flavor is the one that defines what characteristics the instances you create will have. You create the aggregate, you set a property — in this case ssd=true — and after that you start adding all the different nodes that have this property. This is all on the administrator side; a regular user of an OpenStack cloud is not going to have access to these things, because this is very intimate with your hardware. Then, when you create the flavor, you hint that this flavor can only be used when this condition, ssd=true, is met. This allows the Nova scheduler — the one that looks for an available hypervisor — to decide where your instance should be created. And this way, someone using your cloud who sees, oh, there's a flavor ssd.large with 8 gigs of RAM and a disk size of 80 gigs, also gets the guarantee that this disk is going to be created on an SSD. That way you can manage your different needs.

All of this you can do with other things, too — these are just flags — so you can do the same for GPUs, or for any hardware aspect you care about. Also, if you have different CPU architectures — if you care about differentiating between, say, AMD EPYC and Xeon, for any reason that may be relevant to your kind of workloads — you can use any of these.

Now, the problem with this is that anyone can use it: anyone who has access to create an instance from this flavor is going to be taking advantage of these features. But there are many cases where you want to segregate a specific set of hypervisors for a specific project, because the budget
was given to that project, so they purchased those nodes. In those cases you can also create host aggregates, but filtering by project, and that gives you the guarantee that every instance from this project — in this case, we're addressing it by ID — is going to land on that set of hypervisors. That way you are no longer competing for resources with another project, and if your project is something about research, so those nodes are really beefy machines, you are going to have a better experience; your instances are, in the end, going to run much better.

Then you're going to start optimizing by CPU. OpenStack has a series of configuration options, and some of them are available on the command line or over the REST API, while some others are only available in the nova.conf file. When you defined your instances at the beginning, we were saying: okay, this instance has eight virtual CPUs, and that was it. But if your workload is more specific and you want to model a specific kind of machine, you're going to start saying: okay, I want a VM that has this number of sockets, each socket has this number of cores, and then each one of those cores has this number of threads. This allows more hardware-aware programs to run in a specific way. Again, this is very intimate with what you are trying to achieve, because how this maps back to the actual hardware on the hypervisor matters.
It's going to be relevant because, if you put too many virtual CPU sockets in, you may effectively see a reduction in your performance, because that doesn't map well onto the hypervisor's actual hardware. So, for instance, here we have a flavor that defines two sockets, each socket has four cores, and each core has two threads. If we launch a VM based on this flavor, then when we SSH into it and run lscpu, you're going to see that this machine has 16 CPUs, and if you look into how this is laid out in the machine, it has this CPU topology. Now, this is all fine — we can define as many sockets, cores, and threads as we want — but we still have the problem that there is no way to guarantee how this will run on the hypervisor. The scheduler is going to be moving all these different processes around, and you may still have a bad experience from the performance perspective. So then you start defining stricter rules for how these different virtual CPUs are allowed to run. When you use the CPU policy, you can start saying: okay, the default is shared, right?
So the vCPU processes can float across all the host cores. Now, if you use a dedicated policy instead, each virtual CPU thread is allocated to a specific core on that hypervisor, so you get a one-to-one mapping. And now is when you start seeing the benefits of not overcommitting your CPUs: those virtual CPUs are actually running on a specific core, so they are going to run more smoothly. And the thread policy lets you decide whether those virtual CPUs are allowed to run on a physical CPU that has hyper-threading enabled or not, because when you have two threads running on the same core, there is still some memory shared there, and depending on what you're trying to do, that may not be a good idea — it can have privacy concerns for the workload, or performance concerns.

Then you have NUMA, and you can also model this at the flavor level. NUMA means the memory is not accessed equally; it depends on where your task is actually running in the kernel. The way it looks is this: the RAM is associated with a given socket, so if your program is running on a core in this socket and it tries to access memory pages that are located in the other section, the latency is going to be larger. So again, depending on your workload and how it manipulates all this, it's going to be relevant. OpenStack allows you to define how many NUMA nodes your virtual machines want. By default it's only one, and it's going to float around, but you can start making more specific definitions, like: I want two NUMA nodes, and each NUMA node is going to be composed of a given number of CPUs,
and then each CPU — a given CPU — is going to have access to a portion of the memory. So then we can create things like — let me find my cursor here — you can create asymmetric definitions. In this case, we're defining two NUMA nodes, and then we're saying that for NUMA node zero we assign CPUs zero and one, and two gigs of memory; then for the other NUMA node we assign CPUs two, three, four, and five, and four gigs of memory. That way, a program that is aware of all these characteristics is going to be able to manage that memory and take advantage of this setup. And this gets translated by the scheduler, and when the VM is defined at the libvirt level, it gives you a good representation, so the hardware can be taken advantage of.

Now, this is all great: we're mapping these definitions — which are abstract, in the flavor — down into libvirt. But we still have the problem where the kernel at the hypervisor level is going to be trying to schedule a bunch of other administrative tasks onto cores that we may want to be really using for virtualization. In those cases, the recommendation is to start using isolcpus. isolcpus is a kernel command line option; you add it into your GRUB configuration. So with isolcpus=0-31 you are saying: from CPU zero through 31, don't schedule anything there — the kernel is not going to consider those cores when assigning any task.
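Collected into one sketch, the CPU topology, pinning, and NUMA settings just discussed are all Nova flavor extra specs, and the isolation is a kernel command line change on the hypervisor. The flavor names and sizes here are illustrative assumptions:

```shell
# CPU topology: 2 sockets x 4 cores x 2 threads = 16 vCPUs,
# pinned one-to-one to host cores; thread_policy=require places
# vCPU threads on hyper-thread siblings (use "isolate" to avoid SMT)
openstack flavor create --vcpus 16 --ram 16384 --disk 40 hpc.pinned
openstack flavor set hpc.pinned \
  --property hw:cpu_sockets=2 \
  --property hw:cpu_cores=4 \
  --property hw:cpu_threads=2 \
  --property hw:cpu_policy=dedicated \
  --property hw:cpu_thread_policy=require

# Asymmetric NUMA: 6 vCPUs split 2/4 across two virtual NUMA nodes,
# with 2 GB and 4 GB of memory respectively (values in MB)
openstack flavor create --vcpus 6 --ram 6144 --disk 40 hpc.numa
openstack flavor set hpc.numa \
  --property hw:numa_nodes=2 \
  --property hw:numa_cpus.0=0,1 --property hw:numa_mem.0=2048 \
  --property hw:numa_cpus.1=2,3,4,5 --property hw:numa_mem.1=4096

# On each hypervisor: keep the kernel scheduler off the cores reserved
# for VMs. Add to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub,
# then run update-grub and reboot:
#   isolcpus=0-31
```

Note that the per-NUMA-node memory (2048 + 4096 MB) has to add up to the flavor's total RAM, and the hw:numa_cpus lists together have to cover all of the flavor's vCPUs.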
So anything running on the hypervisor itself is only going to run on core 32 and above, and this gives you the guarantee that any VMs using cores zero through 31 are really running alone there: they are not competing for resources, and they are not going to be paused to run other administrative things the hypervisor may be trying to do. All right, that's all I have for today. Does anyone have any questions?

Well, first of all, great talk, really liked the content; it's quite relevant to what I do. But one question I had is: what you showed is that it's possible to tune your OpenStack private cloud to get the best performance possible. What I'm interested to know is, how do you improve the messaging and the documentation? How do you make people aware of how to get this performance in an easy way, so that they know "this is what I need to do"? Because I feel a common problem is that folks just follow a basic tutorial, and once it's deployed, it's "I'm good to go, I don't need to do anything else," and then when they run their workloads they say "this is really slow, this is crap," but that's just because they didn't know how to configure it.

Yeah, that's true.
The documentation makes a lot of assumptions. For instance, if you go to the Nova documentation on how to configure NUMA and how to define flavors that take advantage of it, it's not going to walk you through why someone might want to do that. There is a baked-in assumption that the reader already knows what they're doing. And then, as I was saying at the beginning of the presentation, every time you take one of these decisions to push in a given direction, to take advantage of your hardware, you are putting yourself in a different position when it comes to launching VMs. The reason many of these are not defaults is that every time you put an extra constraint in the flavor definition, the scheduler has to work harder to find a spot in your fleet of nodes, and depending on the number of VMs you have and how large your set of hypervisors is, you're going to have more cases where the scheduler doesn't find a good place for your VM, and it just fails and gives you an error. So, how to make that accessible? Probably by at least giving pointers on where to read more about certain things, like NUMA and how it affects the programs running there. Some of those links could go from the Nova documentation to libvirt and QEMU, which probably explain much better and in more depth how all this lays out in the hypervisor.

All right, thank you. I know what you're saying, because a lot of times when you go there you just assume the defaults work well. The defaults do work well for me, but I think providing use cases as examples might help users, because if you have to go read a lot of man pages and whatnot, there are so many options, right? And we usually just try to quickly get the work done. I mean, I'm working on a user story,
I have a timed sprint, you know; this is not something you can spend a lot of time on. So I would like it if you could just list some of the common use cases and give examples of what parameters to put in there; I think that would be helpful. But that requires work from developers like you, right? You'd have to think about what most people need. Still, that's what I would recommend, because there are so many different options. How do you even choose? You've worked on this for a long time, so you know what the optimum thing to do is, but for people like me, a lot of times I want something quick. I may not have the time to get into the details, but having seen some use cases would help me determine what I should use. So anyway, that's just my suggestion. Or you can get into real detail on multiple use cases too, right?

I agree with the sentiment behind that idea. The tricky part is that many times people take those examples as if they were golden rules they can just copy and paste without really understanding them. I remember a good example where someone was dropping the file system cache, and they were running it in a cron job, so they were trashing the performance of the page cache, and they didn't really have a reason. It was something they copied and pasted from the internet, because the internet said it would improve performance, but that was under very specific circumstances. If you don't understand the characteristics of the thing you're running, it's better to stay with the default values.

So, real quick: I saw you using m1 as an example up there. If I were to compare this to, say, an xlarge or any sort of AWS instance, have you done a comparison? Like, okay, this is an m1, this is an m2, sorry,
this is like an m4.xlarge, m4.2xlarge, or whatever your favorite cloud provider's instance types are, and then said: okay, this is what they're doing, this is what we're doing? A comparison of performance or anything like that, so I could start to understand the scheduling algorithms you're using, or just the assignment algorithms, how things are happening under the hood, as a cross-comparison to other hypervisors and other ways of doing isolation?

We haven't ever done that, because every private cloud based on OpenStack chooses a different set of hardware, and the hardware keeps changing over time. Many times they buy a set of nodes at one point in time, and then a year later they're getting a different set of hardware because they want to expand. So it's really difficult to say: if you apply this set of settings, you're going to have an experience equivalent to AWS flavor X. In the end, all of this depends on the hardware you're running on more than on which setup you have.

Fair. And I guess the other question I have is, when EC2 first started, there was a really big problem with noisy neighbors on the same instance. How did you tackle the problem of noisy neighbors in what you're doing here?

Basically, this way: you need to start isolating CPUs, and in many cases you need to stop overcommitting the CPUs, because that's what hurts you the most. Even if you are not tuning all these aspects, if you go one-to-one in terms of virtual CPUs to physical CPUs, by default you're going to get a much better experience. But in reality, many private clouds want to overcommit, because they don't have the budget and they want to give the impression that they have a large capacity. Any other questions?

I'm sorry, what does noisy neighbor mean in this case?
Do you see a degradation in performance, or what? You're talking about AWS, right? I just don't understand the term, because everything is on-prem for my organization, so I don't know what noisy neighbors means. ... I see, I get it now. All right, okay, thanks. That was a good clarification, because we all have questions like that. Any other questions?

Thank you very much for that. We have another short break now, so stretch your legs, and we'll come back at five for our final talk of today, on AI.

All right, so for our last talk today, we are very, very lucky to have an expert here who has a really cool talk for us. It's called "Large language models: from zero to hero," and it is my pleasure to introduce Andreea.

Thank you, and I hope you can hear me. Thank you everyone for staying until so late; I know by now we're all thinking of beers and dinner, so I'll try to make it interesting for everyone. To start by introducing myself: I'm Andreea Munteanu. I work at Canonical as an AI product manager. I have a background in data science and machine learning, especially in retail and telcos, but at some point, because I was too frustrated with all the tools, I changed places and joined the product team, in order to build solutions that are actually easy for data scientists and machine learning engineers.

This talk is going to be introductory. It's about language models, as well as how you can benefit from open source; I'm a big believer in open source, and I love communities that leverage it. So let's not make it any longer; I'll tell you how it all started. It was about three years ago, when ChatGPT popped up. I was about to go on holiday, and I was telling one of my co-workers that I really wanted to go to the Maldives; I love diving. She said, fine, Andreea,
you can go, just do it. Okay, okay, that was gen AI, but that's how I got introduced to large language models. If you're just getting started in this industry, I think there is a lot of confusion between gen AI and LLMs, and that's all right, but today we'll focus on large language models. For those who are not familiar with them, they are models that use deep learning techniques to capture complex patterns and produce text. They are usually trained with self-supervised learning, and behind the scenes they are large transformers.

Let's look a bit more in depth at them. Large language models are large: they are trained on huge data sets and have many parameters. GPT-3, which was the one I looked at when I prepared this talk a couple of months ago, had more than 100 billion parameters, and let me tell you, GPT-4 is at least double the size, with even more parameters. GPT-3, just to be clear, was also trained on about 45 terabytes of text, and that's why it's so capable. They have the "language" part because they mainly operate on human language, and of course the "models" part because they are used to find patterns or make predictions within the data.

And while we've all been looking at LLMs in the last two years, we are not really at the beginning of this journey. I remember when I started with machine learning, ten years ago, and everyone was telling me, "Andreea, you're doing unicorns." But machine learning had also been around for quite some time, and I think it's the same with LLMs. They became famous two or three years ago, but the truth is that around the second half of the 1900s people started exploring the idea of language models by building basic rules. As you may imagine, that didn't fly very well, because it was too manual; it was not adaptable, it was not scalable,
so they dropped it. 2020 is the year when, I think, large language models became very popular, and nowadays you have plenty of options, which we'll talk about in a second.

But then, who can use large language models? Do you have a use case? Just raise your hand if you know of a use case. That could benefit two, three, four people... five, don't be shy. Jason has one as well. So there are plenty of use cases. Some of them are very familiar: we all use ChatGPT, right, and chatbots are familiar. But the applications go beyond that. You can use them for text generation, story writing, marketing content. You can use them for summarization; I don't know how many of you sit in long meetings, but I do, I have six hours of meetings a day, so if someone can summarize my meeting notes, I'm very happy about that. Translation is also important and useful, especially in environments where English is not your native language or you struggle with it. English is not my native language, and if I'm tired I might switch to Romanian. And last but not least, classification: in marketing again, for example, there is a lot of sentiment analysis being done, and LLMs are important there.

One thing I want to say here is that, whereas large language models have applications across all industries, from chatbots in retail that tell you the latest offers, to telco companies looking at how you talk about their latest offering to decide whether to keep it, you should always look at the use case. Don't use large language models just for the sake of it; that's not how you should do it. You should start with a problem to solve. What bothers you every day? Does it bother you that you have to go through 20 pages of meeting notes? Okay, let's summarize them.
That's great. But if you do it just for the sake of doing it, it might be fun, but it might not give you results you're very happy with.

That's where it becomes interesting: large language models versus traditional machine learning. I already told you I started with machine learning when it was not that fashionable; I was working especially on structured data. This is a question I get often at conferences as well as from customers; through my role I get the chance to meet a lot of companies, and most of them say they want to use large language models, or they want to do gen AI, but they have no use case. So I ask: why not traditional machine learning? And the question becomes: what's the difference? LLMs are specialized in NLP, natural language processing, and designed to handle unstructured text; however, they are not specialized in image analysis or structured-data tasks such as ranking, for example, and there are other machine learning algorithms and models, such as linear regression, that are more suitable there. So the main question, LLMs or traditional machine learning: it all depends on your use case. It's that simple.

However, there are some benefits that LLMs can have, and you should bear them in mind. On one hand, you get improved performance: nowadays they do get better results on NLP-specific tasks, and they often surpass the traditional approaches on the market. They accelerate learning, so you can often use them for your use case with little optimization, or none at all. And last but not least, they have multilingual capabilities. At Canonical, I think it was six months ago, we were playing around with some colleagues and we managed to build a chatbot that answered in Afrikaans. The reason we tried Afrikaans is that Mark is South African, and he was curious about it.
It answered quite correctly, and I said, okay, I'll try it in Romanian as well. It answered in Romanian too, to the point where it used characters that are not in the normal English alphabet. In the same area, there are nowadays LLMs specialized in less common languages, such as Urdu; that one picked up very quickly. It's an open-source LLM developed in the UAE by a Pakistani startup, and I know about them just because I'm based there.

So how do you take these LLMs to production? How do you move from "it exists, so I can use it" to actually using it? With MLOps. Are you familiar with the concept of MLOps? Let's see... some people, yes. Okay. I still define it as DevOps for machine learning, and those who have been in the MLOps space for a long time will feel like killing me, but that's the shortest definition. It's a practice, not that new anymore, that aims to take machine learning projects to production in a scalable and repeatable manner.

But the truth is that I get a lot of complaints. "Training is resource-heavy. I need a lot of compute power. It's expensive." Yes, I know, you're right. But that's why you can use pre-trained models, such as Llama, for example, or Llama 2; they are already trained, you only need to optimize them, and that's much cheaper and requires fewer resources. Nowadays you also have the public cloud, for example, which can give you access to more powerful compute machines. To put it in context, when we built our own chatbot, which I'll share a bit later, the architecture was costing us 50 cents for each half hour. So it's okay if you just want to play with it;
it's not very expensive. At the same time, you should look at capabilities such as GPU sharing; tools such as Volcano, which is open source, are great for that, as well as distributed training using frameworks such as PaddlePaddle.

"Costs are high" is what companies tell us. "It's too expensive, we don't want to train our own model." Again: you don't have to train your own model, you can just optimize an existing one. There are also other things you can look at, and the most important is that you can use open-source tooling. It enables you to get started at a lower cost, and then, as you scale, you can invest more in it. It's very important to mention here that, unlike any other innovation I've seen on the market, AI is naturally open source: there are big communities around it, and the latest tools and innovations happen in open-source communities and in the open-source space. Also, you should always think of your long-term strategy and think of hybrid clouds. You start on a public cloud, as I said, because you don't have enough resources, but once you want to scale you can always move on-prem; you can consider the HPC clusters that Jason talked about, and there's nothing to stop you from that. You start on one cloud and move towards a hybrid-cloud or multi-cloud scenario, depending on your needs and where you store your data.

But that's not all. "It's not reproducible. You know, I have a team of 20 data scientists; they all do the same thing. They get a result and they don't know how to get back to it."
I think a lot of data science still happens in small black boxes that people don't know about, and that's when you should think of open-source tools again. I feel like a broken record saying "open source" a million times, but I think that's the answer in the AI space. Tools such as Kubeflow or MLflow enable reproducibility: on one hand you get capabilities such as experiment tracking, which is easy, but you can also build pipelines and automate your work, to make it seamless for your teams.

That's what I've done at home. I stopped coding professionally at some point, but I still enjoy doing fun things, and really, after I played with ChatGPT, I said: can I build my own chatbot? It's easy when you look at it. It all starts with a user who queries something. That query goes into the embeddings model, and then into OpenSearch, which here is basically a vector database where all your prompts are stored. But that's not all: from there, the most similar documents are retrieved, and they go through the open-source model, and you get an answer. Of course, not all the answers are good; depending on how niche your topic is, if you ask about something that is not really available on the internet, the answer might not be accurate.

That's when you look at the more real-world example, one that's designed not just to be your pet project but something that people can use. There are a couple more capabilities that you need to ensure. On one hand, you fine-tune or optimize your LLM with data that you have; say you want to build a chatbot about your offerings in a telco company.
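The query → embedding → vector search → model flow just described can be sketched in a few lines. This is a deliberately toy version: the bag-of-words "embedding" and the string-formatting "model" stand in for a real embedding model, OpenSearch, and an LLM, and every name and document in it is made up.

```python
import math
import re
from collections import Counter

def embed(text):
    """Stand-in embedding: bag-of-words counts.
    A real pipeline would call an embedding model here."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# The "vector database": documents stored with their embeddings,
# playing the role OpenSearch plays in the talk's diagram.
docs = [
    "Ubuntu Pro adds ten years of security maintenance",
    "Kubeflow is an end-to-end MLOps platform",
]
index = [(d, embed(d)) for d in docs]

def answer(query):
    """Retrieve the most similar document, then 'generate'.
    In a real RAG setup, the retrieved context plus the query
    would be sent to the LLM as a prompt."""
    q = embed(query)
    context, _ = max(index, key=lambda pair: cosine(q, pair[1]))
    return f"Based on: {context}"

print(answer("what is ubuntu pro?"))
# -> Based on: Ubuntu Pro adds ten years of security maintenance
```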
Maybe it's not the best example, but you should use the data you have, about your offerings and about what customers usually ask, to optimize an existing model. You should also do it in an automated, reliable manner; say it updates once a month with the latest data. And you should ensure that, similarly, all your data, searches, and prompts are stored in a vector database, so that when the user queries, the answer is produced the same way as before.

So LLM projects can be done with open source. Everything I have here is open source; I've done it on my own, and I didn't pay much. On one hand you have open-source models, and I think that's what's interesting. Those who are familiar with the ML world are probably also familiar with Hugging Face; that's where you're going to find a lot of models, though not all of them are LLMs. And then there are also open-source tools that can be used to build your own models or to fine-tune existing ones.

Open-source language models: I can easily give you some examples. Llama 2, Mistral, and Falcon as well, are, I think, the most used ones. They have some benefits. It's easy to get started; there is nothing to stop you. I said earlier in the day that Ollama is also snapped, but I think we have some technical difficulties there, so I don't want to oversell it: you can try it out, but you might run into some problems. It's also cost-efficient. You don't need to train your own model; training a model on billions of
You don't need to train your model training a model on billions of Parameters can be expensive not everyone can afford it I couldn't and also I would want to invest my money in that even if I would have the money because I would rather go on a holiday than train models and buy resources It also gives you flexibility because I'm a hundred percent sure that there is more data available on the internet than you have in your own Machine and then it benefits from community support. I Was not an open source enthusiast before I got into the machine learning world But the truth is that in the ml space there is really an active community that you can ask that's very passionate And then there's also a matter of co-transparency You can always see what goes in what goes out at the same time there are concerns I'm not going to say that everything is pink and and great because often there are questions about privacy And especially highly regulated industries Have some concerns about it. I can easily think of healthcare, for example That's why they've been looking at confidential computing as an alternative to to further optimize their models But then there's also lack of support and lack of enterprise support Especially which can be challenging and then the learning curve is not that easy your first three months in the machine learning world are going to be Very very frustrating. I would say I don't want to point anyone. I don't want to scare anyone, but it's annoying It's just a completely new word It's a lot of data sometimes at least when I started it was also there was not so much compute power which I would start training something it dies halfway through. How do I do it? How do I debug it? It can be Annoying and then there are some security risks as well Which are really related to the vulnerabilities that the packages that you use can benefit There you go. I knew I had a slide with this It's also interesting to look at langchain. It's not a language model per se. 
it's a framework, but it's widely adopted in this space.

Tools: do we know open-source tools that we are using in the machine learning world? Come on, there is one that everyone uses, I'm sure. No? Jupyter Notebooks! Okay, yes, I think it all starts with Linux and Ubuntu, and then it goes towards Jupyter Notebooks, Python, PyTorch; there are plenty of programming languages and tools. And again, it's easy to get started, which I like. It's easy to deploy Jupyter Notebooks; everyone can do it, I guess, or if they can't, they look on the community forums and they will find the answers. It's going to fail a couple of times, but in the end it works. It enables customization. I'm not sure how many of you have tried tools that are ready-made, such as the public cloud solutions, but they are very rigid: they give you a solution that works well, but if you have different needs, or you want to try something new, especially in an exploration phase, they are not that flexible.

And then there's something we don't talk about that much in the machine learning world: open-source tools give you the ability to contribute. Especially if you're just getting started, or you like contributing, you like making the world a better place, you really can do it, and projects such as Kubeflow, which I'm active in, really appreciate contributors. By contributions, I have to say, I don't only mean code. I'm one of those who think code is not the only way to contribute to a project. You can contribute by writing documentation, by providing feedback, by trying the product and building a nice tutorial. There are plenty of ways to do it.
Don't just think of code, because especially if you want to do data science, it's maybe unlikely that you will want to contribute much to how the project is built, but you can provide a lot of very valuable feedback. That's how I started in the Kubeflow community: I tried to deploy it and I couldn't, I failed dramatically, so I went into the community chat and said, hey, this documentation is outdated, I don't know why it doesn't work. In the end, I think we learned why, and we also improved it. There are of course some downsides: security, lack of support, and, similarly, documentation. I just told you what happens, right? Jason told you what happens with documentation; he thinks that the code is the best documentation, but not everyone knows how to read the code, so documentation is important. Also bug fixing: whereas you can suggest bug fixes, they might take a bit longer.

I asked you about tools. Well, when it comes to language models and the tools that can be used, I'll take them one by one. Kubeflow is an end-to-end MLOps platform. It was a project started by Google five or six years ago, quite some time now, and it's part of the Cloud Native Computing Foundation. It's a suite of tools; the way it was designed, it was meant to be a suite of leading open-source tools that can enhance a machine learning project. So it has Jupyter Notebooks integrated; for the training part especially it has Katib for model optimization; it has KServe for model serving; and then it has Kubeflow Pipelines, which I think are the heart of Kubeflow, and are really used as MLOps pipelines to automate machine learning workloads. Then you have MLflow, which is used especially for experiment tracking and model registry. It's, I think, the most famous machine learning tool on the market. It's super easy to use; whenever you get started, it's easy to install, and it's very intuitive as well.
So if you're just looking into something, Jupyter Notebooks plus MLflow is the way to go. Then you have Spark for data streaming, and last but not least OpenSearch, which historically is famous as a NoSQL search database, but which can be used as a vector database; and any project that uses LLMs needs a vector database for storing its prompts. Then, when it comes to Canonical and what we do, and I do love it: we have a solution that includes these tools and a couple of others, and they are really used to fine-tune models, with the benefit of all these components being pre-integrated, and easier upgrading and updating.

But let me move on: what do you do when you want to scale your project? First, and most important: start with a use case. That's how you get started; you have a problem to solve. Don't think you'll scale something that you're just doing for fun, because it may or may not work. Think of your users: whereas now you're likely working on your own, on your machine, two years from now it's likely you'll want a team of data scientists, and then you want them to be secure, to have clear isolation between their pipelines, and so on. Think of how to optimize your resources. There are reports that around 70 percent of GPUs are underutilized, and that's sad and unsustainable. You also have to bear in mind that nowadays it can take six months to get a GPU, so what do you do?
Well, use and look for resource schedulers. There are many open-source options for that, and they enable you, on one hand, to get started with resources you may already have in place, and, on the other, to optimize in the long term. Look at the reproducibility of your work, as well as portability: especially if you start in a public cloud and you might want to migrate, enabling this will help you. Look at monitoring; I didn't talk much about it, but you should enable model monitoring, data monitoring, and data drift monitoring, because otherwise it can be risky: you might have some malicious "friends" who try to play negatively with your models. And last but not least, look at security and compliance. There are plenty of packages being used in a machine learning project, from Python to NumPy to pandas, and many of them have vulnerabilities; there are more and more attacks. Last year there was a famous one called ShellTorch, against PyTorch, and it really harmed a lot of organizations. Whereas when you're just having fun it's okay,
if you want to implement it in your organization, look at the security and compliance requirements. And all in all, have a long-term vision for your machine learning project, whether it's with large language models, whether it uses gen AI, or whether it's just a traditional machine learning project, which doesn't look that interesting nowadays but I think is still very useful.

So, just to recap: you should get started with a use case, using open-source tools. Try to enable reproducibility of your experiments, and try to build solutions that run in any environment, whether it's public or private cloud. Then you should always scale with integrated solutions, and once you start scaling, look also for enterprise support. And that was me. There you have my contacts. Tomorrow, Jason, that's our booth number? Tomorrow or the day after, I'll be around if you want to talk more about AI. And I hope I stayed on time, did I? Yes.

Well, I know we have some questions. Oh, come on. No questions? It's okay. Do you get pushback when you make such a compelling case about machine learning versus LLMs? Because there are some fanboys, right?

Yes, sometimes I do get pushback, because we do see people wanting to play with the latest and greatest tools on the market, which I'm happy with and encourage. At the same time, I don't think we, especially as an organization, should sell solutions that don't solve a problem. So when I push back, I usually try to come back and identify the use cases. It always starts with a problem; if there is no problem, there should not be any machine learning project. Just because your friend across the street does it, you should not do it.

The takeaways I get, as a layperson who does not work on either side of that, are: more cost-effective, more flexible, with more focused results. Yes? So as a business owner, I have options for how I can run my business. That's kind of what I'm getting out of this.
Yes. It's probably unfair, and maybe someone's going to argue, but you make a compelling case. Thank you. There is a question over there, but there is one more thing I want to add, which is related to sustainability. We don't talk about it much, and reports are only starting to come out, but if you leave too many GPUs running in one place, they become environmentally unfriendly, and we should bear that in mind and be very mindful and cautious about it.

Thank you for the presentation. You mentioned OpenSearch a couple of times, and I used to work a lot with Elasticsearch; OpenSearch, as far as I know, is just a fork of that, and I'm wondering how exactly it fits, because it doesn't seem like machine learning or AI to me; that's more like parsing and tokenizing.

OpenSearch initially was not designed for machine learning use cases, you're right; it started once Elasticsearch went closed-source. Here it's used for storing the prompts, from prompt engineering, in LLM use cases; that's how it's being used.

Okay. My name is Nathan; great presentation, thank you. My question is: I loved that multilayered Venn diagram of machine learning, then AI, then LLMs and all of that. With the new hotness that is LLMs, I'm seeing a lot of people asking "can't we use an LLM for this?" when you'd probably want to use something like traditional classification, or linear regression, right? So how would you guide those conversations, to say: hey, that's not the right thing for you; on the journey from zero to hero, maybe you should be a hero in linear regression instead of GPT-4?

Usually it's a bit of a dance there, because again, most people are attracted to working with the latest and greatest tools on the market.
However, usually it goes back to their use case, their problem. I give examples of algorithms or similar success stories, as well as timelines. I think when it comes to traditional machine learning they can get better timelines, because it's something that's been studied on the market. So I think it takes a lot of consultancy when you want to convince someone what's the right path to go. But as long as you position yourself as the expert in the room and you have the right arguments, which do need preparation, it should be easy.

More questions? Yes.

You mentioned you built your chatbot there in the flow diagram. Do you have that set up as a script, or how can I set that up at home?

The diagram itself is just Inkscape; that's how it was done. We do have the code, actually, open source. I can try to find it afterwards, and I'll share it with, I think, Nathan probably. Yes, we can give access to that. At the same time, you have to bear in mind that the chatbot was really for internal purposes. Some of the questions and some of the things can be fun to play with; we used to ask "what is Ubuntu Pro?" and "who's the founder of Ubuntu?", and it answered correctly in many languages. But for some other things it was not very well optimized, because it was more of a pet project that we wanted to have. But I'll share the code with Nathan.

Where do you see this going in the next couple of years?
Oh, I love this question. I always say the future of AI is open source, and I'm not saying this because I'm here; that's what I truly believe. I think in the long run there is going to be a shift in job roles: we're going to have experts from different industries on AI projects, which is very important. I also think we'll see more companies and more people running or taking their projects to production, rather than just having fun and playing around, which is going to require a lot of upskilling; better security and better monitoring are just a couple of things I can think of.

Also, I do expect to have more sustainability concerns. It's interesting, because I know that, for example, the telco industry, which was also a very unsustainable industry, nowadays has projects to turn cell towers on and off depending on how they are used. I think in the long run it's going to be the same with machine learning, and I'm excited; I know I'm curious.

And also, as a user, I think it's going to be, in one way or another, commoditized. Nowadays we don't notice that we get recommendations on Amazon when we buy things, but there's a machine learning algorithm behind it. I think we're going to see more and more applications in our lives that use machine learning, and we won't think, "ah, it's an ML project"; rather, it will be normal in our lives.

You mentioned that step one was to have a use case. What advice do you have for companies that want to apply LLMs but don't have a use case yet?

If you don't have a use case... I mean, it's a bit of an unpopular opinion, but I think if you don't have a use case, you should not try it out, but rather try to collaborate with research institutions and universities.
That's where you find easy use cases, as well as with public sector organizations. I think organizations across the globe, from very developed countries to maybe less developed countries, are looking to digitalize different parts of their activity. If you have a company, try to collaborate with them, identify their use case, and try to help them out. Don't just do it because you want to do it; I'm still against that.

I'm a statistician, I actually work in public health, and most of our statistical models are used in machine learning. Statistical models are explainable because they're wrapped in a hypothesis framework; linear regression is actually a hypothesis framework for correlation between predictors, or features, and outcomes. With machine learning models such as decision trees, which my thesis is actually on, and all that stuff, it's more empirically based and not much into theory for explaining. Do you see, in the future, there being more research into how large language models work, in terms of explainability?

Yes, and there are already papers being published on this topic. I think explainability of any model is going to be a big topic in the upcoming years. The main reason is that as organizations start adopting and start building their own ML models, there are going to be questions about the decisions these models take. If you think of a credit risk analysis use case in a financial services institution: if I'm telling a person, "you cannot get this credit," you also need to explain why, and the organization will be liable to all sorts of sanctions if they cannot explain why. And in order to explain why, they need to actually have an explainable model. I know that there are a lot of investments in this area, and a lot of papers are being published.
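As an aside, the credit-risk point can be sketched concretely: in a linear model, every feature's contribution to a decision is visible, which is what makes "reason codes" for a denial possible. Below is a toy illustration in plain Python; the feature names, weights, and threshold are all invented for this sketch, not taken from any real scoring system.

```python
import math

# Illustrative coefficients for a toy credit-scoring model. The feature
# names, weights, bias, and threshold are invented for this sketch.
WEIGHTS = {
    "income_norm": 2.0,     # normalized annual income, higher is better
    "debt_ratio": -3.0,     # debt-to-income ratio, higher is worse
    "late_payments": -1.5,  # count of recent late payments
}
BIAS = 0.5
THRESHOLD = 0.5  # approve when predicted probability >= threshold

def score(applicant):
    """Return (probability, per-feature contributions) for one applicant."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    z = BIAS + sum(contributions.values())
    prob = 1.0 / (1.0 + math.exp(-z))  # logistic link
    return prob, contributions

def decide(applicant):
    """Approve or deny; if denying, name the dominant reason."""
    prob, contributions = score(applicant)
    if prob >= THRESHOLD:
        return "approved", None
    # The most negative contribution is the dominant reason for denial,
    # analogous to the "adverse action reasons" lenders must provide.
    reason = min(contributions, key=contributions.get)
    return "denied", reason

applicant = {"income_norm": 0.2, "debt_ratio": 0.9, "late_payments": 2}
decision, reason = decide(applicant)
```

With these invented weights, the sample applicant is denied, and the model can point to the single feature that pushed the score down the most; that per-feature accounting is the kind of explanation a closed black box cannot easily give.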
I also think that it's still a topic that is not extremely mature. At the same time, open source models are more advanced on it than black boxes that are completely closed source.

I believe we are at time, is that correct? Yeah. So, no doubt there are more questions, but let's have a round of applause. That was great, thank you.

When your talk was submitted, I was excited because I wanted to see what Canonical's up to in this space. I didn't expect such a broad overview, with enough details that we could all hook into, presented so cleanly. It was a really great talk, thank you.

Thank you. One thing that I didn't say is that the tools I had in one of the slides are all available from Canonical. You can just go on our website, deploy them, and provide your feedback. You can find me on Discourse and Matrix, and not just me but also the engineering teams behind them. They love the community feedback; not more than I do, but I interact with a lot of people and they don't interact that much, so they are really excited when they see feedback coming in. If you try the tools out and have feedback, just reach out to us. We are a message away, and some of us are just a couple of miles away as well.

That was great, thank you. That concludes day one of UbuCon. Tomorrow we kick off with something very different from today, because today we talked about a lot of big things: we talked about big data and we talked about big projects. Tomorrow we start the day with an artist's discussion of open source and how it can benefit the arts in the community. That's how we're going to start, at 10 o'clock here tomorrow. The day has many great talks, including a hands-on workshop on building apps for Nextcloud. So we're going to have this and more fun tomorrow. I'll see you all then, 10 o'clock here. Thank you all for coming.