I was going to close it, but I think I'll leave the door open. There's not much noise out in the lobby. Thank you. A mask and glasses and an over-the-ear microphone, it's a little complicated. All right. Thank you, everyone, for coming. Let's wait for that to... There we are. In case you aren't sure you're in the right room: this is everything we've learned from three-ish years of funding open source. My name is Duane O'Brien. I am the director of open source at Indeed. I've been working in open source program offices for about nine years at this point, and I get excited and animated when we get onto the subject of open source sustainability, and in particular the flow of funding into the open source ecosystem. I'm also basically Professor Farnsworth: I love delivering good news, I spend as much time as I can in slippers, and I have a drawer with many different lengths of wire. Even though I'm going to be talking about the work that we've done at Indeed, my opinions are my own, just in case there's anything I say that you find controversial. Funding open source is the topic today, and I want to take a minute to pause and recognize that the flow of funding is not the whole solution for open source sustainability. We know that. There are lots of different facets to this conversation, but today's conversation is primarily going to be about the flow of funding. Just to give a sense of what we're going to cover today: I'm going to give a little bit of history on the program, and in particular I'm going to show how our budget has evolved over the last five years that I've been at Indeed. Then I'm going to break what we've learned into three categories: what we've learned from engaging with and funding foundations, what we've learned from funding projects, and what we've learned from funding people. And then I'll make some final observations about what I see happening in the broader open source ecosystem.
Three basic takeaways for today. The first shouldn't be a surprise: funding open source is incredibly complicated. You have to be prepared to really dig in, do the work, and make some mistakes. Alternatively, you can pay someone to remove the complexity of funding open source, but that comes with some drawbacks. The second is that we have to talk more about our funding decisions, as organizations who are sponsoring projects and as individuals who are involved in funding. The more we talk to each other, the better the decisions we can make. We make better decisions together. And finally, there's no substitute for doing the work. I mentioned that you can pay someone to remove the complexity when it comes to funding open source. That's an approach, but you're not learning very much in that process. And at the end of the day, someone still has to take money and turn it into time, and turn that time into code, and it's much more efficient just to create the code and skip the funding cycle. Ready to dive in? Some history and background. These are some snippets from the job posting that I responded to when I applied for the job at Indeed. If we look over the responsibilities they were hoping to assign to their open source leader, they're all focused on helping projects, helping teams: a very engineering-focused role, developing software to help build process. They wanted somebody deeply involved in the projects. When I applied for the role, one of the things I needed to do was put together a presentation laying out my vision for the open source program office. And I opened with this idea that an open source program office is a position of service, and that you're balancing this service between the company, the community, and the individual.
And this resonated with the executive sponsor who ultimately brought me in, because what I didn't know is that giving back to the community was a core design principle of what he wanted out of the open source program office. He had only been thinking about giving back to open source through the engineering lens: how do we give back to projects? How do we make our projects great? And opening up different avenues of giving back to open source clicked with him, and that's how we ultimately came to terms. Now, interestingly, although this talk is primarily about funding, I also had this slide: you cannot easily spend your way to credibility. I believed it then and I believe it now. I think a lot of times when we see organizations involved in this space, we see big flashy appearances at conferences and then kind of nothing for a while, right? So when I said that you cannot easily spend your way to credibility, I followed it up with: if you want to get noticed, you have to show up. And that's what I've been trying to do with Indeed's open source program office, help Indeed show up for our dependencies in new and interesting ways. Funding is the lever that I've had the best access to, and it's the one that I've pulled the most. So I want to talk about how our budget has evolved. I'm going to give some hardish numbers in a way that I haven't before. Disclaimer ahead of time: numbers are just numbers. If you are in a small organization and you see a big organization's budget and feel intimidated, that's not the point, right? I want to share the model for how we grew things over time. This isn't a recommendation, this is just sharing how we did it, and I want it to inspire you to think about ways that you can approach your own funding decisions. That said, I'm very proud of the work that we've been able to do here, and you'll see that come out as we talk. So, some budget items that I didn't include. I didn't include conference stuff.
I put an asterisk here because we sponsor the Python Software Foundation, and sponsorship of the Python Software Foundation also includes sponsorship of PyCon, so they're kind of mingled up. But in general, anything conference-related we don't have here: sponsorships, booths, booth build-outs, swag, coffee, stickers, that sort of thing. Internal promotions: if we had a summit, if we had speakers come over, if we made gifts for those speakers, if we traveled to summits; that time we threw an event and I said, I want tachos at the event, and Catering said, what's a tacho? I have no idea what you're talking about, right? Software and licenses: compliance tooling, pilot programs; if we encountered a library where we needed to either purchase a commercial license or use the AGPL, that's not covered under this. And direct labor: no contractor costs, no team contributions (we didn't try to size any of that), no salaries, anything. So it really breaks down into four categories: internships, people, projects, and foundations. In 2018, I came in with a budget that was set at around $180,000. Most of this was allocated to foundations: the Apache Software Foundation, the Open Source Initiative, the Python Software Foundation, the Linux Foundation, the Cloud Native Computing Foundation, and, weirdly, webpack as a project that was in there, because an engineering leader at Indeed said, we should sponsor webpack, and they said okay and put it in the budget. After I came in, I was able to find a little money for outreach, so we did about three interns that year. When I set the budget for 2019, that was the first year that we did the FOSS Contributor Fund. Quick show of hands if you know what that means. Two or three. So I'll spend just a brief amount of time on this, because I've talked about it a lot, but I didn't want to put a slide in here specifically about it. The FOSS Contributor Fund is a blueprint we developed at Indeed to sponsor individual open source projects.
The way it works is: every month, everyone who makes an open source contribution gets to vote on a dependency that we use, and the dependency that carries the most votes gets a $10,000 sponsorship from Indeed. No strings, no requirements, nothing attached to it. We've been doing that since 2019. The budget for it was $120,000 the first year we did it, and you see that reflected here. Most of the growth in the budget for project sponsorship came from adding the FOSS Contributor Fund. We also doubled our Outreachy sponsorships, so we were doing about six interns annually through Outreachy at that point. In 2020, you see a little incremental growth. It's not quite the significant change that we had from 2018 to 2019. Mostly what we added here was some additional interns through Major League Hacking, and then we added four quarterly payouts to the FOSS Contributor Fund as well. So we had $10,000 a month plus $10,000 a quarter, which allowed us to participate in things like the MOSS fund speed-dating event that was run at FOSDEM, where different open source funders came and kind of pooled funds together and heard pitches from a lot of projects who were looking to secure some funding. In 2021, we see another big jump in the budget, and this is the first year that you see the people aspect of our funding show up. This was entirely through an engagement with GitHub Sponsors, in particular their GitHub Sponsors for Companies program, which is still very much in beta. GitHub Sponsors as a framework, and in particular GitHub Sponsors for Companies, doesn't distinguish between people and projects, but we only use it to directly fund contributors. So if we give money to ESLint, we give money to ESLint through the FOSS Contributor Fund. If we give money to the maintainers of ESLint, we give money to the maintainers through GitHub Sponsors. That will hopefully make a little more sense as I get further down.
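The selection mechanics described above are simple enough to sketch. This is a hypothetical illustration, not Indeed's actual tooling: the nomination fields and the eligibility checks are assumptions based on the criteria the talk describes (used internally, OSI-approved license, not employee-owned, some way to pay them), and the vote tally and fixed $10,000 award follow the monthly process described here.

```python
from collections import Counter

AWARD_USD = 10_000  # monthly FOSS Contributor Fund sponsorship


def eligible(nom: dict) -> bool:
    """A nominated project must be used internally, carry an
    OSI-approved license, not be employee-owned, and have some
    way to receive money. Field names are illustrative."""
    return (
        nom["used_internally"]
        and nom["osi_approved_license"]
        and not nom["employee_owned"]
        and nom["funding_channel"] is not None
    )


def pick_winner(nominations: list[dict], votes: list[str]) -> tuple[str, int]:
    """Tally one vote per contributor, counting only votes for
    eligible nominees, and return (project, award)."""
    eligible_names = {n["name"] for n in nominations if eligible(n)}
    tally = Counter(v for v in votes if v in eligible_names)
    winner, _count = tally.most_common(1)[0]
    return winner, AWARD_USD
```

A vote for an ineligible nominee (say, an employee-owned project) simply doesn't count, so `pick_winner` always lands on a project the fund can actually pay.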
And 2022: we haven't spent all of this money yet, but it's projected; it's what we're expecting to spend over the course of the year. We see another jump as we added some more foundations: NumFOCUS, the Open Source Security Foundation, the GraphQL Foundation. Oh, I neglected to mention: in 2020 we also added Software Freedom Conservancy into the mix. So this year we added NumFOCUS, OpenSSF, and the GraphQL Foundation, and we significantly expanded our outreach sponsorship as well, to the tune of about 15 interns over the course of this year. So in total, over five years: $2 million, to nine foundations, to 60 projects, to about 36 interns, and to between 120 and 160 contributors that we fund directly through the GitHub Sponsors for Companies program. That's the size and the scope of our funding. When I originally put this talk together, I said that over three years we'd given half a million dollars to people and projects. Then I sat down last night, later than I'd like to admit, and pulled all the data together for this slide, and I was really proud to see this number. And really grateful to Indeed, because I have talked to many of my peers at other open source program offices who have a hard time getting a couple of thousand dollars to do anything. So I'm very proud of this work, and I'm very proud of what we've been able to accomplish here. One of my speaking problems is that I talk fast, and I don't know if you see me doing that... that's for everybody. All right, so let's talk about funding foundations. "Foundations" here means a lot of things. Usually when someone in the US says foundations, everyone thinks that's a nonprofit foundation, but not necessarily. I'm using foundations to talk about 501(c)(3)s that are nonprofits, and 501(c)(6)s like the Linux Foundation that are trade organizations. I'm also using it as a blanket term to refer to international organizations of a general nonprofit status, right? So I don't split a lot of hairs here.
It's a big umbrella term. The process for funding foundations is: I choose them. I look out across the ecosystem, I listen to who I hear talking, I look at the work that they're doing, I think about how I want to position Indeed in the open source landscape, and I put together sort of a blanket list: here are the foundations that I'd like to see us sponsor going forward. So I choose them, but how do I choose? First, I start with the foundations that are at the center of the open source ecosystem, like the Apache Software Foundation: the ones that are technology-neutral and vendor-neutral, doing broad work that crosses a lot of different strata. I then include foundations that are in relevant technology or language landscapes, like the Python Software Foundation. And finally, I add in some organizations that align with our core business interests, like the Cloud Native Computing Foundation. That's just how I approach it. And, you know, in the beginning, the list of foundations that was in the budget was there because my executive sponsor had talked to some different people to get advice when he put the budget together, and they said, well, here are some things that you should put in there. The Apache Software Foundation was very important to one of them, so the sponsorship of the Apache Software Foundation was fairly high compared to the others. So, having sponsored some of these for five years, what did we learn? All foundations are not created equal. We know this and we talk about this, but it's important to keep in mind that just because it says "foundation", or just because it exists and operates like a foundation, doesn't mean that you can skip due diligence on that organization, right? You still should pay attention to their operations. You should pay attention to how well they're executing on their mission, and you should pay attention to what's going on in the broader organization.
And in particular, there are significant differences between 501(c)(3)s and 501(c)(6)s. 501(c)(6)s exist to serve the interests of their members, and 501(c)(3)s exist to serve a common interest. Sometimes those overlap really well and sometimes they don't. So as you're looking at foundations, keep in mind that a foundation is not a foundation is not a foundation, right? Especially as you start bringing in international organizations. The second learning is that benefits vary widely depending on the type of foundation. One of the reasons that sponsorships of the Linux Foundation come along with so many perks is that (c)(6)s have fewer restrictions on what they can offer in exchange for membership in the organization, by comparison to the Open Source Initiative or the Apache Software Foundation, where what you can effectively get is a little bit of help telling your story and your logo on the sponsors page. Their hands are tied: they can't be seen to give an exchange of goods and services for membership, because that runs afoul of the rules and regulations for (c)(3)s. So the benefits will vary widely depending on what kind of foundation it is. And I mentioned earlier that the Python Software Foundation wraps up PyCon in connection with sponsorship of the foundation, which enables them to offer a broader range of benefits. So you have to take each one of them as they come. And because they're all different, taking full advantage of all the benefits you're offered can be very, very time-consuming. I promise you, for that list of foundations that we sponsor, we do not take advantage of everything they give us. They all want to help us tell a story. They all want to help promote us. They all want to help do these other things. I could probably keep a full-time person occupied just taking advantage of all of that, right? But I don't sponsor the Apache Software Foundation on behalf of Indeed so that I can get all these wonderful perks.
Interestingly, when I sponsor the Linux Foundation, I do expect to take better advantage of those perks, because they're business-related, right? So it can be time-consuming. My approach is to be kind of zen with the fact that some of them I'll use and some of them I won't. But it also means that when renewal time comes around, the foundations are very eager to say, what can we do to make you happy? Because secretly they're also saying, please renew; we're kind of stressed out about budget, right? And I have to go through the process of saying: we're fine, I know we aren't taking advantage of everything, we're going to continue to renew, and so on. Next is that foundations need unrestricted funds in order to effectively manage their operations. Here's what I mean by that. If you write a check to Software Freedom Conservancy and say this is for Git, Software Freedom Conservancy can't do anything with it except Git, right? But they still have to pay their own staff. They still have to get Karen and Bradley and company out to conferences to speak on behalf of Conservancy. They still have to run operations. They still have to run payroll. They still have to pay accountants. All of these things they still have to do, right? So when you are writing checks to foundations on behalf of a project, that's not the same thing as supporting the foundation. I love Outreachy, clearly; we're a big sponsor. But if I say these funds are only for interns, the Outreachy program cannot use them for operational overhead.
So there's a little hand-waving when I said how many interns we sponsored, because in the beginning I wanted to sponsor those interns specifically, and after repeated conversations with Sage and Karen, I started putting fewer restrictions on the funds: put these into Outreachy and use them the way you see fit. This next slide is super important, and if you write checks on behalf of an organization, please, if you only remember one slide, remember this one: inflation, recession, and economic turmoil hit foundations before they hit your budget. The foundations that your organization sponsors are already worried about next year's renewals, and they may not have even gotten through this year's. Right? So the sooner you can say we're renewing, the sooner you can commit to renewing, get your requisitions open, and ensure that they know that money is coming to them, the more you will make their lives a little less stressful. And at least so far in my experience, what do I want to say: prices have already gone up for the foundations, and your budget might not have caught up; they might not have come to you and said, we need to change your sponsorship level. So just remember: if you're feeling those things in your life, if your organization is feeling those things, the foundations are definitely feeling them, and they're worried about it. Right? This is what it means to show up.
So let's talk about funding projects. Our process for funding projects is driven entirely through the FOSS Contributor Fund. We only do this by collecting votes from people at Indeed who make open source contributions, and nominated projects have to meet certain criteria: they have to be used at Indeed, use an OSI-approved license, not be employee-owned, and there has to be some way to pay them. It's very hard to pay a project that doesn't want money; that's a whole different ball of wax that we won't talk about today. And if a senior leader comes and says, we should give money to this project, we nominate it for the FOSS Contributor Fund. So that's the entire process that we use. By the way, even though I don't have a link here, if you Google or DuckDuckGo for "FOSS Contributor Fund", you'll get to the book we wrote about it, the Creative Commons-licensed materials that you can use to implement the framework in your own organization. And if you come up and talk to me afterwards, I have a working group that meets every two weeks specifically for people who work in the open source funding space, and we all talk about this. This model has also been adopted at Salesforce, Johns Hopkins University, and Microsoft; it inspired the funding that Sentry did last year for open source projects; it was used as a blueprint at Spotify; and there are other organizations who are adopting the framework. So how do we choose? How do we get projects into the FOSS Contributor Fund? We ask leaders, we ask developers, we ask users for project recommendations. All those nominations come from inside Indeed. We also use internal tooling, security tooling, and compliance tooling to get insights into our dependencies. This was one of the first things that we started to do once our team inherited the compliance toolchain: we looked at the due diligence report to see what was in there the most, and looked to see what was fundable. One of the greatest gifts that the GitHub Sponsors program has given the open source funding ecosystem is that
little FUNDING.yml file, machine-readable, that says how projects would like to receive funding. We didn't have that before, and we're starting to see bits and pieces of it, but we also wrote some tooling to go aggregate those across a bunch of projects, to see who had set up this flag saying they're looking for funding. You can also interrogate your containers, your build infrastructure, and your developer tooling, because that compliance toolchain will probably tell you all day long about the Node dependencies in your infrastructure and won't tell you anything about Node itself, right, or nvm, or any of those other things. It'll tell you all kinds of information about your Python dependencies and won't tell you anything about setuptools. So you have to look at the images that you use to put your developer environments together, the containers, your build infrastructure, and go collect a lot of this information. In our case we did it largely by hand; there are better tools now for interrogating containers, if that's big in your organization, that you can use. We also listened to projects, maintainers, contributors, and industry trends. This is kind of shorthand for "I'm on Twitter too much", but it's a little more than that. We've had several cases in running the FOSS Contributor Fund where a maintainer has put out some kind of cry for help in the community that's either gotten picked up by a news article or picked up on Twitter, and one of our internal contributors posted it in the OSS channel, and that project gets nominated for the FOSS Contributor Fund, right? The first time I can remember this happening was for curl. There was some article that made the rounds, I think in 2019, about Daniel's work on curl and him being largely underfunded, or what have you. When that got nominated for the FOSS Fund, it wasn't me, it was somebody else. And for a very brief period, we had made the largest single donation to curl in the project's history. So listening to those
projects and maintainers, and in particular giving your contributors a venue to amplify that information, can also help you surface projects. Oh, this is in the wrong place; this is a learning, so I'm going to jump into learnings. One of the first learnings is that using an Open Collective Fund significantly simplified our procurement process. For the first two and a half years that we were running the Contributor Fund, every payment to a project was a new ticket to procurement: engaging with a new person, getting a new vendor into the system, right? For the projects that were using Open Collective, that was easy. We could make a payment through Open Collective; they were already in our system, they were already added as a vendor, they already understood what it meant. The time that we gave money to the R project and needed to wire money to the University of Sheffield, I think it was, we had to explain the whole process to procurement. Open Collective a couple of years ago put together this idea of an Open Collective Fund, where you can pool money in Open Collective, projects get invoiced against that fund, and you can pay them directly. Procurement loved that: they could deal with one vendor, they could have one sponsorship agreement, there was a transparent audit trail for how the funds were managed, and it solved a lot of problems for us. So that was a big learning on procurement overhead; there were a lot of friction points for us as we were learning to run the fund. The road to making good decisions is going to include making some bad ones; you just can't get there without it. I was at a summit with some funders earlier this year in DC, and one of the things that I've gotten interested in recently is the idea of expanding our current offerings to include a more formal grant-making program along the lines of the Mozilla Open Source Support (MOSS) awards: larger awards for projects, but with milestones and deliverables, managed more like a grant. And Josh Greenberg from the Sloan Foundation
who's very deeply involved in this space, was sitting there, and I said, sort of half-jokingly, I figure if we go write some bad grants, we'll learn how to write some good ones. And he leaned in and said, that's actually how everybody does it in this space. So you have to make some bad decisions on the way to making some good ones. We gave $10,000 to Sentry when... well, Sentry was selected for the FOSS Fund when Sentry was a heavily VC-backed, super profitable company. Not an outcome that I would have wanted. But it inspired them to take that money, pass it on to their dependencies, and give $150,000 to open source dependencies down the line, so it was a good outcome. One time, Kubernetes won the FOSS Contributor Fund, and there was a kerfuffle internally, because some people rightly acknowledged: this is not what I had in mind when we started the FOSS Contributor Fund, and if these are the kinds of projects that are going to be funded, I'm not interested in participating anymore. My response in that case was: next month, we can have a better outcome. And we did. And the CNCF put that money into their scholarship fund, where it was put to good use. But you have to be comfortable getting some of those decisions wrong, or maybe not being happy with them, in order to refine your decision-making process. Some projects don't know how to use funding effectively. I made an assumption in the very beginning that if a project says "here's how you can pay us", they knew what to do with the money when you did. And that is definitively not true. There were a number of projects that we gave money to that hung on to it for one reason or another, and projects have a lot of reasons for doing that. Some projects have well-matured governance structures in place, and you show up with a $10,000 sponsorship and they know what to do with it immediately. And for some projects, that's more money than the team has ever seen in the entire life of the project, and they might need some help figuring out what to do
with it. Access to funding will create unexpected outcomes. Even though a project may not know what to do with those funds, interesting things will happen, or can happen, once they have access to them. My favorite example of this was watching how ESLint started passing sponsorship funds on down to other contributors to ESLint. They hit a point with funding on the project where they felt like they had enough, and they were taking some of their sponsorship money and passing it on as sort of spot bonuses to people who were making individual contributions to the project. That wouldn't have happened if they didn't have a surplus of funding, right? In a way, even if they don't know what they're going to do with the money, they can't make any decisions around it until they have it; all that thinking is academic until they actually have the money in front of them. When we're making funding decisions in isolation, we're missing out on important input. I mentioned ESLint earlier, and I love ESLint and I love the team. The first time that ESLint was in the FOSS Fund, maybe in the first quarter, ESLint was picked as one of the projects. And the same thing happened at Salesforce, and the same thing happened at Microsoft. So it's clearly a popular project, but I'm not convinced it needed a third of the funding in that quarter, right? And I wouldn't want to go back and put my thumb on the scale for any of those outcomes, because I think it's important to have the people in your organization engaged in the process and to have some skin in the game when they're making those decisions. But when we started talking with each other as funders, we started talking about holistic problems in the ecosystem, how to help organizations in a way that you just can't do if you're only thinking about it through your own lens. And the last thing is that I absolutely remember the projects that say thank you, even if all they say is thank you. And I also absolutely remember the ones that never say anything. This is more for people who
might be on the project side of things: if someone shows up to your project with an unexpected sponsorship, in particular if it's a significant sponsorship, just saying "wow, thank you, this means something" really goes a long way. There are some projects that have done very well in this space, and there are projects that, you know, we made a significant donation to and we could not get them to respond to emails after the fact, right? Which is a decision; it's not one that I would encourage. We're about halfway through now; I think we're doing okay. So let's talk about funding people. This is a new thing that we added last year, and we're doing it again this year. This is the program that I was describing; we're doing it entirely through the GitHub Sponsors for Companies program, and the process for this is entirely scripting and analysis. Everyone here is going to ask if those tools are open source. They are not yet. That is my fault. I am hung up on the fact that, wow, they're really not great. I just need to get them out there so that people can use them and help us improve them, and I'll talk a little bit about how they work. So how do we figure out which people we are going to directly sponsor through GitHub Sponsors for Companies? One of the first things we did is we tried to get to know the contributors of our core dependencies. This was pre-scripting days: we figured out which were our most widely used dependencies, we went to the repos, we started poking around to see if we could figure out who those people were, and we started accumulating a bit of a list. Then in particular we started to look for contributors that spanned multiple dependencies, and this is where the scripting came in. The shortest possible explanation is: it takes a list of dependencies, it figures out what it can find on GitHub, it iterates over all the contributors for all of those dependencies on GitHub, figuring out how many contributions they've made to how many projects, and then, like any
multi-billion-dollar corporation, we export to Google Sheets and we do a lot of sorting to get to the tops of different categories. We also break it down by language ecosystem, because JavaScript dependencies and Java dependencies have very different models of operating. If you just sort by how many contributions someone has made, the top of the list is all going to be JavaScript dependencies, because those contributions tend to be very granular. We see this borne out in the Linux Foundation Census II data from a couple of years ago: when they were attempting to figure out what the most popular dependencies were, they had to break it into JavaScript and the rest of the world. So we in particular sorted on how many projects they have touched and how many contributions they have made to those projects. And then we also factored in the business criticality of the projects that use those dependencies. Now, our approach to this is a little primitive right now, but we're working towards something that I'm very excited about and hope to be able to talk about next year. Business criticality in this case was just: how many times does it show up in our infrastructure? How many times do we see this dependency in our own dependency graph? Better than nothing, but it's not a great metric. The project that processes all your financial transactions is only going to show up one time, but it might be the most important piece of code in your organization, right? Julia Ferraioli did a great talk at State of the Source a couple of years ago about how to identify critical dependencies, and in particular the approach to doing it at Google, where their approach was to use CPU time, which was really interesting. But FFmpeg showed up at the top of the list for everything, because of how heavily YouTube uses it, right? So look for other factors that you can include when you're measuring the business criticality of your dependencies. We use a very sloppy metric: how many times do we see it in the graph?
Better than nothing, but it was the approach we used. And then, like I said, we count projects, we count contributions, we sort to find potential candidates, and then we have to go do a bunch of reading. We end up with maybe the top 50 potential candidates in each language ecosystem, or by each dimension, and then we go one by one to their sponsor pages and look and see what we can learn about them. There's just no substitute so far for going and looking and making a judgment. There have been lots of tools that try to do that, but I don't think there's any substitute for it. So that's a lot. What did we learn in that process? First, that bespoke sponsorship levels and perks make it impossible to track transactional benefits. Who backs stuff on Kickstarter? Anybody? All right, hands up if you've backed more than 10. All right, I've backed like 300. The updates process, the thing where you get project updates from Kickstarter, is totally useless to me because I get them all the time; it's just noise. It's like Dependabot updates if you're watching a lot of projects. There's so much in these transactional benefits that if you're trying to take advantage of them and you're sponsoring contributors at scale, it's impossible, because every contributor can set their own sponsorship amounts, and they can set their own perks for what you get for those sponsorships. There's a little bit of convergence within a language ecosystem, which is kind of interesting: the Python ecosystem tends to do things in powers of two, $16, $32, $64, $1,024 a month. I don't know why, specifically. All of the popular JavaScript maintainers I've seen do them in even hundreds, and they all have like a $6,000-a-month level. I don't know why; it's just what has emerged out of those language ecosystems. So we currently are sponsoring 118 individual contributors through GitHub Sponsors, and to my knowledge, based on my conversation with Jessica Lord, who's the product manager for that offering at GitHub, and through
my own investigation, Indeed is certainly one of the broadest sponsors in terms of the breadth of people we're sponsoring. And it's really hard; we just can't keep track. We try to look at the people we sponsor at the higher dollar amounts and take advantage of some of those things, and the rest we just have to let go. My approach to Kickstarter stuff is: it shows up if it shows up, when it shows up, and that's kind of how I feel about these other perks. So that's really tough. I don't see the platform making that easier for anyone, and I add it here as a learning because I think if you're going to sponsor contributors like this, you have to let go of some of those perks, just like you have to let go of them when you're sponsoring foundations. Second: auditing is needed to figure out if contributors have gone idle or priorities have changed. We do this every quarter. When we make a decision to sponsor someone, we commit to sponsoring them for a year, and then every quarter we look back at the ones we sponsored a year ago to see if they're still active, to see if that dependency is still in use, to make sure it's still someone we should be sponsoring. Right away we had someone who decided they were quitting open source for a while. We were sponsoring them at a fairly high level, so if we had continued to sponsor them for another year, they wouldn't have done anything. And I think there's a reasonable debate about whether you're sponsoring past contributions or expected future contributions, but I won't get deeply into that. Third: sponsorable contributors lack broad diversity across all dimensions. The first time we put together a list of the top 20 contributors to our dependencies, they were all from the same region and they were all from the same demographics. Which means if you care about diversity, and in particular if you care about spreading your funding around so that you're not perpetuating existing bias in the funding ecosystem, you have to put in the work to go
find those people, because they're not coming up if you just sort by number of contributions or value of the projects. This is an area where I think GitHub could do more, and should do more, to help highlight some of those contributors, because for a lot of them you have the opportunity to be their first or second or third sponsor, and if you're doing that on behalf of your organization, that can mean something to them in attracting other sponsors. And lastly, some contributors have extremely modest goals that could be met by just a few of us. When you look at people who've set up their GitHub Sponsors page, some of them will say "I want 10 sponsors." Ten? Really? You're asking for $5 a month and you're hoping to get 10 sponsors? I think we can help you get there. A lot of them don't want a second income from this; they just want to be acknowledged. Or sometimes you'll see sponsor perks that say "I would like enough to take my partner to dinner once a month." Surely we can pull that together, right? So that's an overview of learnings from funding people. I want to talk briefly, because I said I was going to at the outset, or at least in the original description, about funding interns, because we've spent a fair amount there. I'll leave like 15 minutes for questions, if you can be patient. All right, cool. I'll spend a little bit of time talking about funding interns; this was in the original description, though in the end I decided I didn't want to talk too much about it. So how do we choose them?
We don't. We try to stay hands-off on sponsoring interns. We write checks to Outreachy and ask Outreachy to run their program, because Outreachy knows what they're doing, right? And I say usually, because there's one case where we did get directly involved in the selection and sponsorship of interns, and it wasn't through Outreachy; it was through Major League Hacking. In 2020, COVID hit, a lot of things shut down, and a lot of spending contracted in organizations because no one knew what was going to happen. Indeed has a partnership with a free boot camp called Techtonica that operates out of the Bay Area; it's a free boot camp for women and non-binary adults making transitions into tech careers. They were having a really hard time finding placements, because their class was graduating right then. We worked with Major League Hacking, and Major League Hacking pulled in another sponsor, Twilio in this case, to put together a Major League Hacking intern cohort specifically for that group of Techtonica graduates. That's the only time we've gotten directly involved in funding interns; mostly we try to leave that process up to them. What did we learn? Those programs can only scale effectively when they have unrestricted funds. Every company wants to say "we're sponsoring 36 interns through Outreachy"; nobody wants to say "we're funding Outreachy's general operations," because that doesn't connect with people in the same emotional way. But it's so much more important in some ways. And you have to remember those interns also face inflation, recession, economic disruption. If you're sponsoring Outreachy, their costs have gone up, and if we keep our budgets flat, they're trying to do the same or more with way less funding. So: final observations, things I see in the ecosystem and things that I hope will come in the ecosystem, and then we'll have plenty of time for questions. In my opinion, the tooling isn't there yet, but it helps. There are all kinds of tools and platforms and offerings and solutions that want to
make funding open source easy for you, and they all do an okay job, to varying degrees. But if you're funding at scale through GitHub Sponsors, the platform isn't ready for you yet. If you're funding through something like Thanks.dev, they're doing a great job of really trying to filter funding out to the entire dependency tree, but it's not there yet; it's going to continue to evolve. We need more tools, we need to help develop those tools, we need to share our own tools. One of the things we need badly is a cross-ecosystem, machine-readable piece of funding information for projects and maintainers. The FUNDING.yml file from GitHub Sponsors gives us this a bit, npm fund will give it to you for npm, and every language ecosystem has its own way of doing this now. There is a project called Ecosyste.ms (I don't know how to spell it) that is sort of the next iteration of what Libraries.io was, by some of the same people, and this is one of the things they're trying to solve, in particular for people in organizations who are involved in funding. Without that machine-readable information: there are tens of thousands of dependencies in Indeed's stack, and I cannot go through that by hand. We need that information. Next: scoped project requests are hard to find, but they're high impact and easy to fund. The Python Software Foundation has a repo called Fundable Improvements; these are things that you can pay to have happen. It is so easy to look at something with a price tag and go back and make a case for it to your executive sponsor, compared to "we should give $50,000 to the Python Software Foundation because they need it and they own a lot of stuff." With those scoped improvements you can say: this is exactly what they're going to get, and this is what we have in our budget. There just aren't a lot of projects using that model and making them available, but if you can find them, I think they're higher impact and easier to fund, and I want to encourage more projects to use those.
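For reference, the machine-readable funding metadata mentioned a moment ago is a FUNDING.yml file that GitHub reads from a repo's root or .github/ directory to render its sponsor button. A minimal sketch, where every account name is a placeholder:

```yaml
# .github/FUNDING.yml
github: [maintainer-one, maintainer-two]  # up to four GitHub Sponsors accounts
open_collective: example-project
tidelift: npm/example-package             # format is platform-name/package-name
custom: ["https://example.com/donate"]
```

Because it is structured YAML at a predictable path, tooling can harvest it across thousands of repos, which is exactly the kind of cross-ecosystem signal funders at scale need.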
Organizations are going to keep funding open source. That's a fancy way of saying we're going to keep throwing money at the problem, because throwing money at the problem is an easier lever to pull than throwing labor at the problem. Talk to most organizations: they can write you a $50,000 check more easily than they can give you a couple of weeks of a developer's time; the scale of price difference between those two things just doesn't compare. So if organizations are going to keep funding, we have an obligation and a responsibility to help them make the best decisions they can. No tool or foundation will replace the value of digging into your dependencies. You might not have time, you might not have resources, you might not have the things you need to dig into your dependencies, and that's fine if that's where you are. But when you pay to offload this complexity to another organization, whether it's a foundation or a tool, you're not internalizing that learning; you're not feeling and understanding the pain in a way that helps you make the best decisions you can. And finally, if you only have enough budget to do one thing: support an open source nonprofit organization. These are people who are deep in the space; they understand the landscape, they understand the work that needs to be done. You might be a small organization and not know where to go with dependencies; find a nonprofit organization doing work that you agree with and write them a check. They can be more effective and efficient with those funds. 45 minutes; I think I did pretty well, and I left 15 minutes for questions. So, you had a question over there. The question is: we hear a lot of conversation about why organizations aren't funding open source, so why isn't this room full of people who want to know how, given that we've laid out a good roadmap? A, the room is a little hard to find, even for me. But I don't think that's the real problem. I don't know; I would love to see more organizations here. It could be that some of the
people, some of the decision makers who are involved in that funding, aren't here. This is a heavily user-focused conference; they're here as sponsors, but they may not actually be in the room. So I don't know the right answer to that question. As intimidating as it would be, I would love to have a packed room with so many people that we could continue to talk about this. What I think is important is that if the people in this room want to see their organizations more involved in funding open source: talk about this work, talk about the other organizations that are doing it, talk about the models that are being published on how to do it, and send them to me. I've got a stack of business cards. I'm going on leave for a month, but I'll be happy to respond. The next question is: have we done any investigation into how to articulate return on investment for the money that we've spent on these projects, since that information could be compelling for executive sponsors? We're getting there; I'll talk a little bit about where we're headed with that. Another one of the problems in the vast dependency ecosystem is that there's no single place you can go to answer the question: is this healthy? Is it healthier now than it was a week ago, or a month ago, or a year ago? There are tools that are good at articulating measures of health. The CHAOSS community, of which Dr. Dawn Foster is a very active member, has a working group called the Risk metrics working group that is all about assessing the risk of a dependency, and there are tools from the CHAOSS community, such as Augur, that gather and surface those metrics for dependencies over time. Where we're headed: right now we're standing up an Augur pilot inside Indeed, and we're going to feed it a bunch of dependency information so we can start tracking dependency health over time. Once we have that, then we can do some before-and-after looking at funding interventions, not just on our part but on the community's part. And there's other work in flight here as well: one of the work streams at the Open Source Security Foundation is to aggregate what they're calling a vendor-neutral metrics platform for open source dependencies. Once all this information is in one place, you can start to see whether these interventions are making any difference. So we're getting there. Okay, is it a follow-on to that? Because, okay, I haven't forgotten your question. The question is: is there a case to be made for doing this because it makes you a good open source citizen? That is absolutely the case I have been making in the work I've been doing at Indeed. I think people connect to stories more than they connect to data. I think if you find yourself in a position where you're telling a good story and someone wants data, they're already disagreeing with you; they're looking for either a reason to back up that disagreement or something that might change their mind. People don't fund Outreachy interns because they can see a measurable return on investment for them and their organization; they fund Outreachy interns because they believe it's the right thing to do. Over the course of funding open source, when I have had these conversations with my executive sponsors, I pull up stories like: we were the largest single contributor to the curl project in their history, for like two
weeks. Our work inspired Sentry to turn around and give a bunch of money to open source projects they depend on, and there are other stories like that. Those stories can be compelling. Gotcha. The follow-on observation there was that occasionally they get questions from customers about their involvement in the community if they're not seen as a funder in the community. So, you had a question? Hi, Melissa. We gave you ten thousand dollars a couple of years ago, yeah? All right, awesome, thank you. So the question from Melissa from the GNOME Foundation was asking me to clarify, when I mentioned earlier that not all of the sponsorship perks you get from sponsoring a foundation connect with me, what are some things that might, or what are some things that foundations can ask? I think one of the most important questions to ask when someone shows up as a sponsor is: what do you want out of the sponsorship? Why are you sponsoring? Because there's a very significant difference between "I'm sponsoring GNOME because we build services on top of GNOME and we need to be seen by our customers as someone who's involved in the foundation" versus "we showed up with a check to GNOME because they happened to be in the news at the time around a patent troll, and it caught people's attention, and that carried the vote for the FOSS Contributor Fund." When sponsoring individual foundations, every organization that shows up to sponsor that foundation is going to have a different set of motivations, and it's very difficult (and I'm sure you already know this) to put together a sponsorship prospectus for everybody. So if you understand what they want out of it, and what their reasoning is for deciding to sponsor you specifically, that can help you tailor some of your response. The things that personally connect with me: up to a certain budget level, I'm fine with "our logo shows up, and I know you've got a hell of a lot of infrastructure to run, so you just go do what you need to do," right? If I'm going to write a $25,000
check for a conference, then I'm probably going to start getting a little transactional about the process. So, to use the Python Software Foundation as another example: even though their PyCon benefits and their Python Software Foundation benefits are wrapped up together in sort of one ball, we still end up not taking a great deal of advantage of the PyCon benefits, because I'm not out there to market. If we sponsor a booth, we don't scan badges anymore, because if we scan badges, now we've got badge data, now we've got a data governance problem, now there are privacy issues, and I don't want it, because I'm not going to go around to all of those people and try to sell them Indeed. If we were actively hiring, different story; we might take that, or talent acquisition, or something like that. But those are my thoughts there. Oh no, you are not one of the people I was thinking of; I actually didn't know you were in the room, and I wouldn't have named names anyway. So, other questions? Thank you so much for coming out. Oh, you have a question; almost missed you. Sure. So: how have I seen things change once we started working and collaborating with other organizations? A slight correction, because you mentioned the thing we were doing at the beginning of 2020: I was involved in putting together this effort called FOSS Responders, which was all about mobilizing funds for conference organizers who had last-minute cancellations due to COVID and were suddenly out deposits and so on. FOSS Responders is separate from the FOSS Contributor Fund; that was sort of a one-off push to help those organizations, and also to help those of us who felt paralyzed as COVID was coming on, watching all of this come apart, to participate in something that was meaningful. But to your actual question, how have I seen things evolve as I started working with other organizations: I'm hearing in the conversations this desire to push toward publishing frameworks, publishing artifacts, publishing
other models that other people can use, and I'm seeing broader sharing about this than I did before. The first time I sat down with anybody in the funding ecosystem to ask about how they made decisions about who they funded and what they funded and everything else, it was Cat Allman, who was managing a lot of that sponsorship stuff for Google, and the conversation we were having was very much "I can sit here with you and show you some of our things, but we can't talk about them externally." What I see now that I didn't see four years ago are things like Sentry's blog post where they say "we gave away $154,999.89 to open source maintainers, and here's the process we used for getting to that number." That's awesome. Spotify just did another one very recently, their own one-and-done FOSS fund, which is another approach I've seen a couple of organizations take: rather than doing this as a monthly or quarterly thing, we do it once a year, so we do the analysis and then we don't have to do it the rest of the time. That's what Spotify just did. Microsoft was sponsoring through their FOSS fund and a GitHub repo. Some of this information is becoming more public. This is the most public I've ever been with our data, and I'm hoping that will continue to encourage people who are involved in funding to be open about their processes and their data as well. That's the biggest change I've seen, and we're two minutes from the end. Thank you so much for coming, for finding the room, for sticking around. I'm around if you have questions. Have a good rest of your SCaLE.

Hello. Oh hey, that's pretty good. Hello, can everyone hear me okay? Excellent, all right. It's not usually this deep, but after a night of catching up at SCaLE, we're just lucky there's a voice at all. That has happened at past SCaLEs; I lost my voice after the morning of my talk, okay, but it was fine. So first I would like to congratulate all of you for finding this room. Well done. I suspect that in about 10 minutes or
so we'll have some other people who, finally, after going to every little area on the second floor, will come down here; they'll actually ask somebody and show up. So let's just be very kind to those people as they filter in. They won't miss too much, just some intro; it'll be fine. Yeah, exactly; I just now have my bearings, I think, for where things are at SCaLE. So anyway, my name is Kyle Rankin. I am the president of a company called Purism, and we make freedom- and privacy-respecting laptops and phones, like this one. Thank you very much. It has a lot of interesting security features, but I'm not really going to... I'm going to be talking about some of that a little bit, maybe, but really what I'm going to be talking about today is snitching on apps that snitch on you. And you can see the link at the bottom here; this link is for this talk, so if you want to take pictures of slides you can, or you can just write that down or take a picture of that URL, knowing that you can always get all of the slides and all that information there. So let's introduce the talk. Phones are our most personal computers. This is a computer that we keep on our person most of our waking hours, and for some of you, maybe even past that. A couple of things: not only are they on our person, they typically have things like all of the pictures we take; they have all kinds of data that we store on them that is pretty personal information; messages we send back and forth to our loved ones, some of them maybe that you wouldn't want the rest of the world to see; stuff like that, right? Very, very personal. They also contain more sensors than your average computer does: there are gyroscopes, there are compasses, light sensors, GPS. There are all kinds of extra sensors your phone has. Phones are particularly vulnerable to privacy abuses because of all of the data you can get from all those extra sensors, and the fact that you always have it on your
person, on, all the time, so you can get an incoming call, for people who still use their phone for calls. So it's very vulnerable to this sort of thing. Android and iOS in general hide from you how apps snitch on you. When I'm talking about snitching, what I'm talking about is applications that get some data about you, from the sensors or from whatever they can see, and send it back to the app vendor. Now, when I say hide, I'm not talking about the fact that both of these OSes will show you what permissions an application wants when you install it, and you approve it; that's not really what I'm talking about. I'm talking about once you do that. Say, for instance, that you grant an app access to talk to the internet and to have your contact list, and you approve all of that. At that point you don't really have an easy ability, because of security features that are built into the OSes, to inspect what those apps are doing from the OS itself. You have to go to extra lengths. People who do a lot of research in this space will often set up a special router that their phone connects to, and then they'll do all of their inspection from the router. If they want to see what kind of traffic an app is actually sending, they have to use an intermediary. Android and iOS hide the snitching that the OS itself does even more than they hide what the apps do. There's this implicit trust: the OS believes you should give it an implicit opt-in to collect all of the data and share all of the data with the OS vendor, and that's even more obscured from you as a user. Again, there are security restrictions around some of that to lock down the OS, so you can't see that sort of thing; if you want to inspect it, you generally have to add some level of router in between. It's not just phones, though. I mean, it's always been a problem; spyware is not anything new, malware is not anything new, but it's a growing problem on desktops. Desktop OSes are
also taking a lot of cues from the smartphone market, in particular with how smartphones handle application installations: app stores, and the model of "I'm going to release an application that costs a buck or two," the shareware-type model. But we know that most of those applications (and I'll get to this in a little bit) are usually funded from data, and that model is moving to desktop OSes. So it's now becoming a growing problem that something you install on your OS might also be snitching on you to the vendor. This is less of a problem on Linux, largely because of the free software and open source software that's on Linux. It's not that you can't add metrics collection to a Linux application, or to an open source application; you certainly could do that. It's more that you can't hide it very easily, and more importantly, because it's under some sort of free software license, if someone doesn't like that you did that, they can fork your code and release a version that doesn't do those things. So as a result, developers are less incentivized to write applications on that platform that do those sorts of things, because it's very easy for someone to just release an alternative. But notice I said less of a problem. It's not that it's not a problem at all; there are some instances, in particular when it comes to collecting metrics, where you'll still see some of that, and we'll talk about that. So if you want to catch snitches, the best way to do that is to figure out a way to audit all of the network traffic going from your device. Again, traditionally, when someone wants to do this, they will set up a router in between, with some sort of span port or something, where they can monitor all the traffic and go from there. Enter OpenSnitch, which is a desktop-friendly tool written for Linux that allows you to do just that on your Linux desktop. And it's not simply for catching snitching, although it's very useful for that, but also for catching malicious activity: if you have an application
that's doing unusual network traffic, that's making unusual requests to things on the internet that are unexpected, that should raise an eyebrow. And because a lot of the ways attackers root a Linux machine involve having you either install something or replace a tool you're already using with a backdoored version that then starts reporting to the mothership, something like this can also potentially detect that sort of thing. All right, so in this talk I'm going to talk about the fact that your phone snitches on you, and your desktop also snitches on you. I'm going to talk about how you approach firewalls when you're talking about a desktop OS instead of a server, because it's different. As someone who's been in security for a long time, and who fields a lot of questions about security, one of the most common questions I get, for example at Purism, is "what's the firewall on PureOS like? What firewall is installed by default?" The people who ask those questions are very well-meaning and concerned about security, but the way you approach a firewall on a desktop computer, I'm going to argue at least, is different than how you would approach it for a server. Then we'll talk about OpenSnitch specifically: what it actually is, the basic things you'd want to know about how to install it (it's a little bit different, potentially, for your distribution), and the fact that it's something you need to train. That's a big part of using this tool, learning how to train it, so I'm going to talk about that, and in general how to use the program, and a little bit about my future plans for how to use it and where I would like to see it go in the future. All right. So: your phone snitches on you. There is an entire ecosystem out there devoted to capturing and selling your data. I don't think anyone here is probably too surprised at that fact. I mean, privacy awareness is at a very high level now, especially compared to, say, 10 years ago, and people have
long told me, or have said online, "people don't care about privacy," and I've always said no, that's not true. I think people are generally just either unaware of what's happening to their data, or, even if they are aware, they don't feel empowered to do anything about it, because they're just an individual versus these very large companies that have a huge amount of resources to bring to bear on this. I mean, we have an entire generation of computer scientists who are spending all of their waking hours figuring out the best way to collect data about people and send it to their vendor. That's the problem computer science is solving at the beginning of the 21st century. Think about how many well-paid engineers are doing that 24/7 (well, only 80 hours a week, depending on the startup). Now, for the most part, the applications doing this are proprietary applications, and you can understand why that is: if something's doing something shady, if an application's snitching on you, it's way easier to do that if people can't inspect the source code and see what it's doing, how it's doing it, and that it's doing it to begin with. For the most part, on these phone platforms, the majority of the applications have some sort of shareware-type model, where it's either sort of free in a limited form or you can pay for a pro version for a couple of bucks or whatever. There's certainly free software on these platforms, don't get me wrong, but the majority of the software people install is not, and it's funded in some way, either directly or through some subsidy, by the data that it can collect. Now, here's the thing about the iOS App Store: it actually snitches on apps that want to share data, which is a really good feature. This has been improved over time, even more so recently, so that for an application that wants data and wants to phone back home, the iOS App Store is way better about giving you information
about what the application is going to do, which is to its credit. The downside is that it doesn't include itself in that, and there are some loopholes, because again, there's a notion with these OS vendors that yes, your privacy is very important from everybody else, but you should implicitly trust us, because we have root on everything anyway. So anyway, there are some loopholes to some of that; these are hyperlinked, and I have notes at the end for all of the things I'm referencing. But that's sort of the downside: third parties definitely have a lot of restrictions, but the OS itself doesn't necessarily fall under that notification. Android and iOS: I think most people have a sense that Android does this, and if you get stock Android you're sort of like, yeah, of course I'm sending data all the time. But there's a study (this is a couple of years old now, so the numbers may be different) that found that both OSes phone home every four and a half minutes on average if you have the phone on. Some of the data it sends includes very unique, identifying data: for instance, the IMEI, which is a unique identifier for your cellular modem; hardware serial numbers; the SIM's IMSI, which is a unique identifier on your SIM card; telemetry data, which is of course the things your phone is doing; MAC addresses, in some cases, of nearby devices it can detect; and GPS location. I think most people would conclude that most of this is pretty personal, pretty identifying information. Also, this is not something you can opt out of. This is baked into the OS; it's just something the OS does. To opt out of this means installing something else, essentially finding some alternative OS, if you can, that you can inspect somehow to make sure it doesn't include these bits, or using something that doesn't run Android or iOS, I suppose. It's not
just your phone your desktop also snitches on you I should have a slide I should have a talk your car also snitches on you now and it's a huge problem but that's a different talk but let's just start with like your desktop is becoming a smartphone just like your car is becoming a smartphone the model is just so lucrative that it's hard for people to turn down easy money like that many phone app practices are now moving to desktop OS have seen the power of being able to dictate and have a funnel through which they can filter what applications are allowed to be installed on their platform this is a power that they didn't necessarily have before in the past on a desktop computer those of us that have been using them for a long time are used to being able to pretty much install if an application says it supports Windows 10 or whatever or supports macOS whatever version you know that I can get it downloaded or back in the old days get some sort of media install media install it and run it phones don't really work that way right phones you have an app store of some kind that filters and dictates what things you can install and again there are some security reasons for that there are also some control reasons for that and and there's a lot of money being made in these app stores right now because anything that does charge money generally has to kick back to the vendor of the app store for the privilege of being on the platform and there's no way around that on the phones everyone realizes how much money you can make if you could also control all the apps that are installed on a desktop computer and so most desktop proprietary desktop OS are going to are starting you can already see them moving to this model and I think it's going to be this trend is going to move to where it's very similar to smart phones ultimately the main limiting factors are the fact that people right now have this sense of how it used to be and so there are people if it would happen immediately 
everyone would be up in arms so it has to kind of happen gradually number one and two a lot more the infrastructure has to go into place like you have to have the ability to wholesale restrict what binaries are allowed to run on the platform before you can do that you have to have some level of security lock in for what apps can run in a wholesale way before this completely works so some ways that desktops can snitch on you so macOS this is a security feature but it will ask Apple for permission each time you run a signed application so this is all in the name of if you have an application that's been signed by a vendor and you want to make sure that the one that you're running is legit and has been tampered with you can it's all been signed with a certificate from that vendor Apple will then compare that signature with using certificates which means that whenever you launch the application if you have a network connection it will probe Apple to see whether or not that it's legitimate and if so it will allow you to run the application now of course that releases this new story came out a while ago and there were some some conclusions jumped to about how severe this was that at first and then things sort of clarified a little bit where at first people had the sense that they know every after running and when you're running it and all this stuff and it's not necessarily the case because it's based on vendor certificates they know the vendor who signed the application that you're running if that vendor signs a lot of applications then it could be one of any of them if it's a small vendor that only has one application then I guess you could derive that I guess infer that but anyway so that's what Mecha West does it also now lets some of its own traffic bypass firewalls that you may have on the system this is something that came up as part of the same story because people were using talking about the fact that trying to trace this network traffic and little snitch which 
is what the program I'm going to be talking about is based off of in spirit like trying to implement the same features complained that newer versions of macOS were starting to again for security reasons but restrict the ability of user space applications to inspect traffic from certain services and the idea was to prevent it from things like this that are security sensitive not allowing something in user space to potentially modify it or tamper with it so there's a legitimate reason for it there's also this sense now that even a tool snitch doesn't necessarily have a wholesale view of what the OS is doing it has a more filtered view Windows 11 also collects telemetry you can disable it but it's on by default and there's a guide I linked to it in the back if you're using Windows 11 there's steps to going through a long list of things just like disabling things on Facebook I guess you can eventually find it and disable stuff to their credit Linux is generally better for the reasons I said at the beginning of this talk but some apps may surprise you I found this out when I started running OpenSnitch on my phone and I figured well the training is going to be pretty simple I'm going to say yeah Firefox is allowed to talk to 443 and 80 because those are the ports that it talks to on the internet and other internet applications the few that there are mostly the same sort of deal I was surprised when I launched GNOME Calculator and I got an alert popped up that said it's trying to resolve imf.org what's going on there why does my calculator need internet access that feels very android-y to me but I didn't look into the code the documentation doesn't really talk about this very much and I haven't inspected the code to figure out why I suspect it has something to do with IMF National Monetary Fund I assume it has something to do with the financial calculator part of that application I never allowed the IMF.org to see what ports it was connecting to after it got the DNS 
resolution so I'm not sure but anyway that's just a surprising example another surprising example is if you on Linux using a tool like this buy something from eBay let's say or say you visit eBay I do in many cases where I will have some javascript block like I use no script myself which allows you to control what javascript is running on the platform and I block most things especially things that are obviously marketing metrics gathering stuff I mean that never goes through but OpusNich let me know that the javascript from eBay that was allowed through was trying to make this random UDP port connection to something called like not metrics.net but something like that like okay cool so even the minimal javascript I was running it wasn't again not connecting to 443 or anything it's connecting to some random UDP port on Linux it was doing this so let's talk a little bit about why firewalls are different when you're talking about a desktop OS compared to a server traditionally when you read documentation on firewalls I've written books about this subject myself and usually when you're talking about firewalls in the context of a server and when you're talking about a server usually the focus on a firewall for a server is services that are listening on some sort of port and when you're thinking of firewalls the goal is to block incoming traffic going to those ports based on some sort of rules maybe you only want to allow servers from a particular subnet to talk to it or you want to restrict it in that way or other restrictions like incoming traffic or in like in this parlance the jargon would be ingress traffic so incoming traffic is what you're really focusing on blocking now some administrators go the extra mile for the sake of security and they will also inspect and block egress traffic outgoing traffic well that's great it's rarer more nowadays people are more focused on that but especially traditionally when people are talking about firewalls they're basically 
saying build this big wall between me and the internet and stop everything coming at me and they're not really thinking about the things going out in fact since we're all sitting here wearing masks it's a lot like before the pandemic those of us who did a lot of housework and that sort of thing like woodworking and had in 95 masks going around in our house because we used them like any fine particulate matter like 50 measure but then you discovered after this pandemic that some in 95 masks have a little vent for the convenience of the wearer when they're doing all this work because as you know you're wearing a mask all day it does make it more difficult to exhale and if you're working on doing woodworking or whatever they had this little vent that basically had no outward filter so you breathing in for sure outward and there were some people when we were talking about mask policy and that sort of thing they're saying you should avoid using those in public if you're concerned about not just filtering particulates coming into you but also if you have something that you're sharing with the rest of the world and again this is similar to how a lot of server administrators view firewalls for servers and traditionally firewalls it's like we don't really care what's going out we just care about what's coming in by default this hasn't always been this way but over the last let's say 5 years at least maybe even 10 years usually these days desktop installed OS's have no public listening services on them again in the olden days there's this notion of well it's everything's a server and a desktop if you're running Linux so you would have like all this crazy stuff running but more recently most distributions turn off everything including things like a secure shell or anything like that because they say you shouldn't really need that if you're running a desktop now you can always install it later if you're such an expert but by default you don't need it and so for the most part 
when people talk about well what's the firewall on my desktop OS I would argue if you're worried about incoming traffic and again there's put an asterisk here because you're going to say well what about but for the most part if you don't have a service listening there's nothing for you to block incoming wise what you could potentially have malware that then opens the port or whatever but that's a separate issue but for the most part again there's nothing to block on a desktop what you really need to focus on is all the outgoing traffic the way that most people have issues on their desktop when you're talking about exploits or snitching too but the concern from a security perspective is you have a either javascript that you downloaded from a website and running from a browser or you installed an application outside of approved channels or something and now it's trying to connect to its command control server and get instructions and that it's talking out to the network and then once at that point then it has a connection established and then you can establish remote control up until that point you wouldn't because it has to reach out first before something knows that it's infected and can reach back in so for a desktop I would argue the first focus is all outgoing traffic now after that if you want to think about maybe but to me that's not the first focus it always should be outgoing traffic so when you do this you catch apps that phone home like applications that are sending metrics of some time usage data other things that maybe you didn't want to share you also are detecting apps that shouldn't need the network that are using the network like my GNOME calculator example like why is my calculator trying to talk on the internet it's a calculator the other thing that this lets you do is detect malicious software because again there's software that like why is this random program talking out or more importantly things like why is W get running right now why is curl 
running right now I didn't execute it things like that that raise an eyebrow a desktop firewall can help you detect that the other thing that you want to be able to do with this and this is something open snitch can do is you want to be able to block traffic based on the application and the remote connection details traditionally with a server firewall you're more focused on IP addresses and ports not necessarily whatever applications running remotely because you're hosting a local service but when you're talking about a desktop firewall you want to allow some programs certain levels of network access that you wouldn't grant to others and some programs you may not want to grant any network access and some you may want to most of them you want to restrict only to certain ports or certain remote connections a desktop firewall should also fail safe which means deny by default if it has a connection that's being established and it doesn't know what to do it should default to allowing it through and hoping that later you will detect that and do something about it it should block it first and then if you're not there because and this is particularly important for phones but anything that's on all the time is going to be left unattended sometimes and if you have especially if you have a system that's based on an alert popping up asking you to authorize something if you're not there to see it because you went to lunch what should it do what it should do is block it for now and then later you will hopefully see the alert again when the application makes another attempt OpenSnitch does these things so what is OpenSnitch it's an application firewall for Linux and it's based off of macOS's Little Snitch people in security that run Macs love Little Snitch it has a service that connects to the kernel using there's a couple of different ways that it can connect to the kernel to inspect processes it has a number depending on what features are in your kernel it can use a couple of 
different ones it also uses either I'm going to talk about this it monitors outbound traffic using either IP tables or in-app tables depending on what your system supports it also uses in-app tables or IP tables to create firewall rules that can then enforce whatever your policy is it provides a nice UI that lets you manage all of your rules that lets you see all the events that are happening and also it allows you to manage your rules a separate UI pop-up that will happen whenever a new outbound connection occurs you will get an alert that pops up that lets you see what's going on right now and then allows you to set a policy for this new unknown connection that doesn't have a rule already so that it can store it if you don't respond in a certain period of time this is configurable it will temporarily stop this connection now what temporary means is configurable it could be a number of minutes it could be until the next reboot this is all configurable and rules themselves can expire or be permanent so if I set a allow or a block rule I can say until the next reboot for the next 15 minutes or I can say always and forever and if I do that I can also always go back and change it different rules for example you could allow Firefox to talk to Port 443 to everything you could restrict it to only allow that to certain hosts you can do that for pretty much any application and control it on a per app level and a lot of different levels so how do you install this it's not in all distributions at the moment it's still a relatively new project It also is under somewhat new management over the last couple of years. It existed and then sort of went astagnar for a couple of years and then was picked up again, which is great. And so now it's sort of, now there's actually quite a bit of rapid development on it, but it's still not necessarily packaged for all OSes. So the easiest install, they actually provide packages for a lot of OSes. 
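The core ideas so far, per-application rules, deny-by-default, and rule durations, can be sketched in a few lines. This is an illustrative sketch only, not OpenSnitch's actual rule engine or rule schema; the process paths and field names are assumptions for the example:

```python
# Illustrative sketch of a per-application, deny-by-default egress policy
# with expiring rules. Not OpenSnitch's real engine or rule schema.
import time


def make_rule(process_path, dest_port, action, duration, now=None):
    """Build a rule dict. duration is "always", "until restart",
    or a number of seconds from now."""
    now = time.time() if now is None else now
    expires = now + duration if isinstance(duration, (int, float)) else None
    return {"process": process_path, "port": dest_port,
            "action": action, "expires": expires}


def decide(rules, process_path, dest_port, now=None):
    """Return the action for a new outbound connection.

    Fail safe: expired rules are ignored, and anything without a
    matching rule is denied, so an unattended machine blocks unknown
    traffic instead of letting it through.
    """
    now = time.time() if now is None else now
    for rule in rules:
        if rule["expires"] is not None and now >= rule["expires"]:
            continue  # rule has expired; pretend it isn't there
        if rule["process"] == process_path and rule["port"] == dest_port:
            return rule["action"]
    return "deny"  # deny by default


rules = [
    make_rule("/usr/bin/firefox", 443, "allow", "always"),
    make_rule("/usr/bin/curl", 443, "allow", 15 * 60, now=0),  # 15 minutes
]
assert decide(rules, "/usr/bin/firefox", 443) == "allow"
assert decide(rules, "/usr/bin/curl", 443, now=10 * 60) == "allow"  # still valid
assert decide(rules, "/usr/bin/curl", 443, now=20 * 60) == "deny"   # expired
assert decide(rules, "/usr/bin/gnome-calculator", 443) == "deny"    # no rule
```

The last assertion is the fail-safe behavior in action: a process with no rule at all, like the calculator example, is blocked until you explicitly allow it.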
Some of the challenge, part of it, is that it's a Python app that just does pip install for its dependencies, so you're bringing in all of these things. Part of it, from my understanding from people who were looking to package it in Debian, is that there are build dependencies, not application dependencies but build-time dependencies, that differ from what's available in the OS, so it's more challenging to package. All right, we're back in business. OK, where was I? To install, go to the release repo and install the package; they have debs and RPMs. It's also built for x86 and ARM, with packages for both platforms, which is really cool: you could install this on a Raspberry Pi, for instance, or a Librem 5 phone, which runs ARM. There are basically two packages: a service, which is the thing that talks to the kernel and manages all the firewall rules, and a separate Python UI package.

Here's an example of the UI. There's a lot of information for an application like this to display, so you definitely get strong enterprise-app vibes looking at it: a lot of tabs, a lot of columns, a little spreadsheet-y. But again, there's a lot of information to show. Here's the default events tab. What the events tab shows you is the current things it's doing, usually because you set up a rule to deny or allow traffic: something happened, it applied its rules, and it shows you what it did. Say you went away somewhere, came back to your computer, and wondered, did a rule block something while I was away? You can look in this first events tab and see. Well, let me go back to this.

And you can see there are all kinds of other tabs. This is where you would do things like modify rules on the fly or create new rules; you can do all of that from this very extensive interface. The nice thing is that it's adaptive. This is a screenshot running on my phone. What I mean by adaptive: the analogue, I would say, is responsive web design for websites, where a website on a small phone screen adapts all of its widgets to fit. An adaptive UI means the same thing, only for a desktop application: if you drag the window's corner until it's small, the size of a phone screen, it does the right thing, whatever that means; it stays usable. Now, an application like this, with all the information it shows across all the tabs, is going to be challenging to make super usable on a small screen, but it works surprisingly well. I wasn't expecting that when I installed it; I figured, great, I'm going to have an application running half off the screen. But they actually put some effort into making it adaptive, and while it's a little difficult on a touch screen, it's not that bad to get to all of these tabs.

You can see an example here. This is me launching Firefox on Linux, and you can see some of the connections it's making. Firefox has a captive-portal detection service, so that if you're on hotel Wi-Fi or something behind a captive portal, it can detect that and do a pop-up. So right when you launch it, it's making a lot of connections to mozilla.org and firefox.com, both to check for the portal and to do other things.

You can also customize your rules. Here's an example from the rules tab. You can see a list of some of the rules in place, and you can see that temporary rules and permanent rules are separated in the left-side column; they're organized by whether a rule is always there or not. The temporary rules are useful. What I find them most useful for is when you get a pop-up and you're not really sure what to do, so you say, I'm just going to be safe and deny. And then the program you're using doesn't work, and you're like, oh, I actually needed that one. OK, then you can go back to this rule and either delete it, or say, you know what, I want to modify this and make it always allow this thing I was blocking, because I'm going to want it in the future. This tab lets you do that.

Here's an example of a warning window. This is a case where I was making a connection to a local service running on port 8080 instead of 443, which is where all of the services I'd allowed Firefox to use normally are, so this is an example of it complaining. This is a screenshot from my phone screen, so you can see how the buttons at the bottom are a little squished compared to what you'd otherwise see; I'll show in another window how you can configure this a little better. There's a little countdown next to the deny button, a configurable number of seconds counting down until the window automatically closes and the connection is denied. So you have that amount of time to freak out, investigate, and figure out what to do. Now, if you hit the little plus button, it opens an extended window that lets you fine-tune the rule. It also stops the countdown timer, so if you find the default 15-second countdown too stressful, like I did, you can configure it, or you can just always quickly hit the plus button. Then you'll get this window, and you can fine-tune.

You can check which attributes the rule applies to. Only the destination port? A particular destination IP only? Things like that. So, for example, say you have a common 8080 port you want to connect to on a certain host. If you wanted to get that fine-grained, you could set that here, only for that certain host, so that if another website tried to connect to port 8080, or Firefox tried to do that somewhere else, you would see it. Something else I didn't point out on this alert: it shows you the full path to the executable generating the alert and, if it has one, the app icon, which is pretty handy when a pop-up appears and you're trying to identify what's causing it. The full path to the binary is also very useful so that you know: is this the legit Firefox, or some other weird application named Firefox running from my home directory or something?

You're going to be doing a lot of training when you use something like this. If you've ever worked on an intrusion detection system, you know that security tools with allow lists and deny lists generally have some period of training where you tell them what's good and what's bad. This isn't necessarily a beginner tool as it stands today. If you're new to network security and aren't really sure what an application should be requesting, this program may not be for you yet, for that reason, because it's going to ask you a lot of questions about what to allow and what to deny during training. It can be kind of a pain at first, because when you first install it, you're going to use your computer and start getting pop-ups for all this stuff.

Some of them you won't get to in time, and then they'll automatically deny, and you're like, why doesn't this thing work? The common refrain whenever you set up a firewall or network security for the first time, especially if you've done it for a network of users, is: something didn't work, I guess I need to turn off the firewall and see if it works then. You're going to be faced with all of that. For that reason, you have to realize that it will probably take about the first week or two after installing this application to train it, just because you're not necessarily running the same exact applications all day, every day. After about a week or two's worth of workflow, you'll have set up all the common rules, those pop-ups will go away, and you'll stop getting bugged about those connections. But then you'll launch some application you only launch every two weeks, and you'll get asked again. Over time... yes, you have a question.

OK, so the question was, is there a way to import and export rules, for example to the cloud or some other service? And the answer is there is. Because it's Unix, everything is a file, so there's a bunch of files in a place, and you just sync those files to another place that then syncs them back to the same place on a new computer.

OK, so for people watching the stream, we had a soda can that just exploded in front of the projector. That was exciting. Is everybody OK? All right. I can't even come up with some sort of funny response to that right now. We're OK; everyone on the live stream, we're fine. And I guess I either passed or failed, because I didn't immediately react. All right. So yeah, it takes about a week or two to train. Let's see, where was I? All right. Oh, perfect segue.
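Since each rule lives as its own file on disk, the import/export the question asked about can be as simple as copying files into a synced folder. A minimal sketch of that idea; the rule directory path varies by install (it is commonly under `/etc/opensnitchd/rules/`, but verify on your system), and this helper is an illustration, not part of OpenSnitch:

```python
# Sketch: back up OpenSnitch rule files so they can be synced to another
# machine. Assumes one JSON file per rule; the directory paths are
# examples only -- check where your install actually stores rules.
import json
import pathlib
import shutil


def export_rules(rules_dir, backup_dir):
    """Copy every rule file into a backup directory (e.g. a synced folder).

    Returns the list of copied file names.
    """
    src = pathlib.Path(rules_dir)
    dst = pathlib.Path(backup_dir)
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for rule_file in sorted(src.glob("*.json")):
        json.loads(rule_file.read_text())  # sanity check: fail loudly on a corrupt rule
        shutil.copy2(rule_file, dst / rule_file.name)
        copied.append(rule_file.name)
    return copied
```

On the new machine you would copy the backup into the rules directory and restart the OpenSnitch service so it picks the rules up.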
Pop-ups can be distracting at first. That was great. That was awesome. Everything from here on is going to feel like it's based on that. So: increase the default timeout to 25 seconds. What you'll notice is that the default timeout is about 15 seconds, and I found that's not enough time to see the alert, stop what you're doing, go to that window, look at it, inspect what it's asking, make a conscious decision about whether to allow or deny, and then act on it. Sometimes it's not even enough time just to pause it, let alone do all of those things. So one of the first things I recommend is changing the default timeout to 25 seconds, because I feel that's enough time to do something about it. If you find that's still not enough, set it to whatever you want; it's configurable. But that's one of the first things I'd recommend.

I believe that when you say allow or deny, the default duration is not forever; it's until the next reboot. The idea is that you can try out a rule and see if you like it, knowing it can go away. Also, whoever created this was probably a network admin, I have a feeling, because traditionally on network equipment you can make changes that apply only to the running config, so that if you screw up and break your network, you can just reboot the machine and you're fine. That's probably why. Anyway, I recommend changing this default to about 15 minutes. I feel that's enough time to see the effect of the rule, but it also saves you from having to dig into the OpenSnitch UI later to change the rule further. This is useful in particular for deny, and for temporary allows: a lot of times I'll get a pop-up where, for the most part, I want to block this type of traffic, but in this one case I want to allow it, and only briefly.

Fifteen minutes may make you feel more comfortable saying allow, because you're only allowing that traffic for 15 minutes; afterwards it goes back to normal and blocks anything in the future. I also recommend you start with all of your frequently used apps: launch one, use it, create the rules for it, launch the next one, use it, create the rules for that, and work through the ones you use most. That way you have a baseline, because otherwise, if you get some odd pop-up you really should notice and respond to, it may be lost in the flood of pop-ups for all the apps you haven't set up yet.

When in doubt, I say deny. You get a pop-up and you don't know what to do? Deny for 15 minutes. The cost is low. If something is persistent and keeps asking, you'll keep getting a pop-up; if it's not, you won't. And if denying it broke something, now you know what broke, and you understand why it was making that connection. You can go to the events tab, which I showed in a previous slide, and see all the blocked traffic. What I like to do is go there, and, while by default this filter isn't applied, there's a filter that says, show me only the traffic that was blocked. That's what I set it to and leave it... could somebody help that person come in? Thank you very much, I appreciate it. So I like to set the filter to show only blocked traffic, because usually when I'm looking at events, I want to see what it blocked for me. That's usually what I care about.

You have a good question. OK, so the question is, is there any notion of a training mode, where it doesn't block traffic, you just use your computer, because there are other tools that do this sort of thing, right? You build a baseline just by using your computer, and then you bring that in. I don't believe there is yet for this. I could be wrong, but I don't believe there's a log-only mode you can feed back in yet. That would be a good feature.

So, future plans for this project. We're working on packaging it in PureOS; like I said, there are some challenges due to build dependencies, and Debian being Debian, to getting everything in there, but in the meantime you can install the packages manually. Even if I were to include it in the OS, and even if I were to install it by default, I probably wouldn't have it running by default until the ease of use improves. For more advanced users who are familiar with network security to some degree, this isn't very challenging to use, but even then it's kind of annoying at first. I wouldn't want to put an end user who's a beginner at this in front of a bunch of pop-ups saying scary things, because we've seen how well that works on Windows. When you scare someone every time any application tries to do anything, people either freak out and do nothing, or they just allow everything by default, and we don't want either of those things, right?

So the idea I have, what I would like to do, is collect reasonable default rules into some sort of optional package, so that in addition to installing OpenSnitch, someone can install an OpenSnitch rules package with common-sense, reasonably secure defaults for things like Firefox: we know Firefox is going to want to connect to these sorts of ports; we know these applications are going to want to do X, Y, and Z. This isn't very different from a lot of container security tools, where you have common profiles for known applications. It's that sort of thing, only applied to network access.
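A default-rules package like that might boil down to a table mapping known applications to the ports they legitimately need. Here's a sketch of the idea; the application list, binary paths, and rule fields are illustrative assumptions, not a real OpenSnitch schema:

```python
# Sketch of a baseline rule set an "opensnitch-rules"-style package
# could ship. Paths, ports, and field names are illustrative only.
BASELINE_RULES = {
    "firefox": [
        {"process": "/usr/bin/firefox", "proto": "tcp",
         "ports": [80, 443], "action": "allow"},
        {"process": "/usr/bin/firefox", "proto": "udp",
         "ports": [53], "action": "allow"},
    ],
    "chrony": [
        # NTP time sync uses UDP port 123
        {"process": "/usr/sbin/chronyd", "proto": "udp",
         "ports": [123], "action": "allow"},
    ],
}


def rules_for(installed_apps):
    """Collect baseline rules for the applications present on a system,
    silently skipping anything we have no profile for."""
    return [rule
            for app in installed_apps
            for rule in BASELINE_RULES.get(app, [])]


assert len(rules_for(["firefox", "chrony"])) == 3
assert rules_for(["some-unknown-app"]) == []
```

Anything not covered by a profile would still fall through to the interactive pop-up flow, so the package only reduces the training burden for well-known applications.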
Possibly, and this would be a pretty quick request after that, would be to have groups of rules. So we have sort of a relax. You're not going to get bugs that much set. A medium security and maybe a strict, because one of the first things that's going to happen is a disagreement about our sense of what should be on by default, whoever's packaging this, and someone else who has a different threat model. So if you have someone who has a high threat model and views everything like a spy movie, they're not going to be excited about allowing anything, or if so, very minimal things. So only it's going to be hard to please everybody with a default set of rules, but if you have at least a slider as it were, that might be one way to approach that. Okay, so question. So additional resources. You can always go see my slides again at that first link, and then I have all of the different things that I reference in here, all of my notes. And let's, well first, thank you very much for coming. I really appreciate everyone being here, and everyone gives themselves a round of applause for finding the room, so congratulations. All right, so any questions? Anyone, yes? Okay, yeah, so the question is, if you have an application that then has a known root exploit within it, correct me if I get this wrong when I say it, that has a known root exploit in it, can't that exploit then as part of the first things it does, this is similar to what anti-virus, like people do anti-virus platforms too. Can it first thing, kill open snitch, and then do evil things? Is that more or less, yeah. So I suppose if it could do those things without doing anything on the network first, so if it could escalate to root from user space, like usually these things come in as a user and then you escalate using some sort of exploit. 
So if the exploit can escalate to root and kill OpenSnitch before it talks to command and control to get a payload, if it's designed in that way, then yes, it could bypass this. Now, a lot of malware is relatively minimal, and one of the first things it does, and again this is a generalization, is connect to command and control somewhere to download whatever specific root exploit it needs for that system, because it's hard to carry all of that in your backpack, I guess, for all possible systems you might land on. But some of them don't; some include everything they need. It just depends. So the follow-up was: escalating to root isn't that challenging if you can install a keylogger that logs what you're typing in bash, with the assumption that you're going to capture sudo passwords. Is that right? Okay. Yeah, so the assumption is that if you have a user who's doing a lot of work in the terminal, and therefore using sudo in the terminal, then that would be true. If they're using a GUI application, GNOME has mechanisms that do a pop-up when root privileges are needed, and the malware would probably have to inspect that instead. But yeah, your point's made: if you can listen for the sudo password, then you could potentially do that. All right, other OpenSnitch questions? Yeah. Okay, for the stream: the suggestion was to make bash immutable to prevent that kind of exploit. It's probably best not to go down the rabbit hole of all the possible mitigations for root exploits on Linux in this talk. We can certainly talk about it after. So, any other OpenSnitch questions? Yeah. So the question was, could you use OpenSnitch to look at the payload, and not simply the metadata about the network traffic?
I imagine it would be possible if you can do that with Netfilter. You might be able to modify it, if Netfilter lets you log not just the metadata but the entire payload without doing something like tcpdump-style sniffing. I don't know if it can, but that'd be a good feature. Another question, yes. That's a good question. So the question was, what about if you're running everything out of containers, say a Snap or something else? There's a similar problem with interpreted scripts: OpenSnitch will not necessarily be able to discern between them if the binary path, the launcher, is more or less the same for all of them. I don't believe it makes determinations based on arguments. So, for example, this is a problem if you have a straight-up Python app. You've probably noticed this if you've tried to lock down a Python application, or any interpreted script, with other tools: you have to apply the rules to everything the interpreter could possibly run, and you don't necessarily want to do that. So, yeah, I believe it would apply those rules to all containerized applications, unless either OpenSnitch were modified to take the entire command line into account, which it might do, but I don't think it does, or the binary executable had a different name. What do you mean? So the question is, is this equivalent to a Swiss Army netstat? What do you mean by that? I see, so the question is, is it like a Swiss Army version of netstat, in the sense that with netstat you can see all the current connections on your system, the current state, and any established connections? Kind of; it's a little different. With netstat, you're seeing what's happening right now, what's already been allowed to happen.
So yeah, Swiss Army in the sense that it allows you to do extra things, yes. I guess if you look through all the allows, you could see a history of everything it let through, and you can see the things it blocked that wanted through. But the other difference is that it stops a connection before it's allowed through. When something first initiates a connection, it won't be established; OpenSnitch shows you the connections that are starting to be initiated, and it blocks each one until you allow it. And that's the other reason the timeout is an issue. Like I said, it's 25 seconds, but some things, when they want a network connection, want it relatively fast, and if they don't get it within a certain timeout they give up. That's why I chose 25: I figure for a lot of things, if the connection isn't instant, the application's own timeout is something like 30 seconds, so I thought that might be safe. Yeah, so the comment was that it's not going to show you the current state of a connection, but rather that some connection was trying to be established, and should I allow or block it. Any other, yes, another question. Yeah, so the question is: it tells you the binary making the outbound connection, but will it also show you if the destination is a local process? It doesn't show you the local process. If it's a socket, it will show you what the socket file is. If it's a local port, it will show you the local port, and then you would have to use something like netstat to figure out what process is listening on that local port. I don't think it expands it out beyond here's the socket or here's the port. Another question, yes. So, no, no, no, I'll summarize for the audience.
So the question was: because it's not doing introspection of the packet, not doing deep packet inspection, it's simply looking at metadata, this process is trying to connect to this remote IP at this remote port. Once you create rules, you could have a situation, for instance, where you give Firefox a blanket rule and say allow all on external 443; then you still could have some privacy risks, because you're allowing anything as long as it uses that same port. I mean, all of us who have been behind corporate firewalls know the trick of setting up a service listening on a common port that you know is going to be allow-listed everywhere, right? So the answer is yes: if you make your rules very broad, that's the problem with making very broad rules. And so you have to go back to your threat modeling slide: how strict do I want to be with this? Do I want to do it on a per-remote-host basis, which could be kind of a pain? Okay, yeah, so the question is, is there an analog of OpenSnitch that's not focused on TCP/IP network traffic but on other networks, like Bluetooth or cellular? Not to my knowledge. I mean, it's taken a while just to have something like this for ordinary network traffic, right? That would be great, though. So if you're listening, internet, please consider writing that tool. Yes, so the question is, does it currently have the ability to run custom hooks and actions other than allow or deny, based on the information about a connection attempt? To my knowledge, it doesn't. To do the thing you may want to do, there are other tools, basically port knockers, which people use when they don't want to secure SSH the right way. Sorry, that's a bit of a troll; I'm going to get it at the end here when you come up to me, I'm sorry.
But anyway, a lot of people will do that, because they forget that the port you're connecting to is not a secret, and so it's very easy to look over a person's shoulder while they're doing the combination lock. But you could potentially use that kind of tool in your case, because you can trigger events based on a connection to a port. The downside is that you have to have something listening on that port to detect it. Yeah, okay, well, I'll take all the other questions out of the room so I can get ready for the next talk, but thank you so much for coming, I really appreciate it. Test one, two, sound good? Can you hear me now, Cal? Is the audio good? Okay, so, hey everybody, thanks for coming. Today we're going to talk about taking your coding skills and transferring them into something different: into CAD. So, let's start out; I just want to get an idea of who I'm talking to. Who here codes on a weekly basis? Raise your hand. Okay, who here has never coded? A few people? Okay, fine. We're going to be talking about a lot of code in this, but it's all visual, right? So don't worry so much. I would be really interested to know how it would work for somebody to learn coding through CAD; that'd be really interesting for those who haven't actually coded a whole lot. So, first let's go over the agenda. We'll talk about the general CAD landscape: what's out there, how are they different, how are they licensed, what do they cost. We'll talk a little bit about parametric design, what that means, why it's important, and why you might want to learn how to use it. We'll go into OpenSCAD, and then we'll talk about turning a model into a physical thing through 3D printing. And we'll go into Q&A, and that's it. Hopefully I'll leave a good chunk of time for Q&A, because it's going to be a lot thrown at you at once. Let's go into one of the topics I know most: me. So, I work for Amazon, for AWS.
This talk has nothing to do with my job, though. I am not here representing Amazon; you understand, this is my hobby, this is something I like to do when I'm done for the day. These are the sorts of things that are really fun for me. If you do want to talk to me about Linux, I work on the Amazon Linux distribution and the Bottlerocket distribution for Amazon, and we do have a booth. We're actually giving away a 3D printer, so sign up for that; it's free, just get a ticket, and you have to be here tomorrow. Let's see here. I have been coding for a very long time, the majority of my life. I started coding very seriously when I was 10 years old. I probably should have been out riding my bike, but instead I was doing things like writing compilers and stuff like that. That was what I found fun at 10. I'm sure there are people in the room in a similar category. But that's what I did for most of my life, and I've determined that I am a habitual coder: if there is something that can be programmed, I will program it. It is just what I do. I've been using CAD for a slightly shorter amount of time. I started when I was 18. I had a career desire to work as an industrial designer. Industrial designers are the people who design, say, the outside of a phone or a laptop, or the case of a TV, something like that. For various reasons I didn't take that career path, but I did learn CAD to figure out if I wanted to do that. So I learned CAD when I was 18, and that was way more than 20 years ago. So those are two things that have been kind of constant in my life, and when I learned how to combine them, it was really interesting to me. I picked up 3D printing during the pandemic. I think you would define me as a maker: I like to make things, physical things. Before the pandemic, I would go to a hardware store, to the plumbing section or to the nuts and bolts and screws, and find what I needed.
But during the pandemic, that was not something I could do, or wanted to do. So I said, what if I could just design everything myself? I had been looking at 3D printers and thought this might be a good time to pick that up. So I bought a 3D printer and I started making everything I needed, from brackets for my window blinds to things to organize my desk. Anything I could design, I would, kind of habitually, like I do with programming. So that's been my pandemic hobby. So that's me, and kind of why I'm here. Let's talk about the CAD landscape. Now, CAD is one of the first things people started to do with computers, right? My uncle was a civil engineer, and I remember going to his office in the early 80s and seeing these massive machines very rapidly replacing the drafting boards they used. When I was a kid, I really loved that they had mechanical erasers: erasers with a motor on them that could erase anything. And I was so sad when I visited the office later and learned they didn't have them anymore, because everything had moved to computers. So it's one of the areas where, even in the early 80s, we were moving very quickly to design things in software. You can imagine how hard it would be to design something complex, like a bridge, on paper, when you could design it inside a piece of software, right? It was a really early desire. So over the years we've gone down a few different ways of building this software. Right now 3D modeling is the way to go; we're very capable of writing efficient 3D modeling software. There's a lot of specialized CAD software, but one of the most common ones you'll see right now is Fusion 360. Has anybody heard of that? A few people? Yeah, so Fusion 360 is pretty good. It's written by Autodesk, which was the originator of AutoCAD, one of those early pieces of CAD software. It's pretty expensive for a hobby, and a lot of people use it.
It's, I think, $400 a year. I'm from Canada, so it's more in my money, my fake Canadian money, but it's $400 a year. And that's a lot of money, especially if you're doing something as a hobby and you decide, for a month, that you're not going to use it. That kind of sucks, right? You're not getting full advantage of your software because it's a subscription model. But it's a really neat piece of software; it can do lots of things. It is largely a point-and-click piece of software where you go and draw something. You can specify numbers in it, but mostly you draw. Similarly, there's SolidWorks, by Dassault Systèmes. It is even more expensive, and it's used a lot in mechanical engineering. I have very briefly used some of the demo versions of it. It's impressive. It's hard to use as well, and it's a similar subscription model. SketchUp is another piece of software. I've used this one pretty extensively over the last few years; I designed a house and a series of cabinets in it. There's a free version of SketchUp that you can get. It's web-based and works pretty well, but there are lots of restrictions on it. If you want to buy it, yet again it's, I think, $400 a year. You can get some discounts and get it down to, I think, $199. But yet again, it's an expensive thing, a big investment up front, if you're just wanting to learn how to use it. And there are restrictions in a lot of these pieces of software on what you can do with your own files. I'm an open source guy; I hate that. If I get a piece of software, I want to do what I want with it, and preferably I get to see the source code, because then I can figure it out. Another one that people might have heard of is Tinkercad. I use Tinkercad a lot; I still use it. It's by Autodesk as well. It's free, not free and open source, but it's free and web-based. It's something they teach to small children; they use it in elementary schools, and it's pretty effective for that.
You can do a lot with it, but once you start getting complexity in your designs, it's really hard to be productive with it. Now we'll move to the free and open source alternatives. You have FreeCAD. FreeCAD is point-and-click with some parametric design; we'll talk a little more about what that means. It's available for free, and it's actually a pretty good piece of software. I've used it a little bit. And then there's OpenSCAD, which is not point-and-click at all. We'll talk about that in much more detail later on, but it is free and open source as well. So there are a lot of different things out there. I'm sure I've left out your favorite CAD program, but as you can see, you have a lot of options. Some of those options cost money; some don't; some are free and open source. So, I've tossed around this term parametric design. Parametric design is very different from what you might be thinking of when you think of CAD. Most people think about manipulating a model in 3D, placing points in 3D space, and using the mouse more than anything else. With parametric design, everything relates to a parameter, a mathematical or statistical variable. So what does that mean in context? Traditional CAD is a combination of clicking and placing values, right? You go in and say, I want a line that starts here, with your mouse, and it needs to be 30 inches, 30 centimeters, some sort of unit. And then I want another line that's 90 degrees to that, so you go and place it at 90 degrees, and you have constraints on those lines, and so on and so forth. That's traditional CAD: a lot of drawing techniques. Whereas parametric CAD relies completely on specifying and calculating positions. Effectively, you're coding your shapes, okay?
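To make that concrete, here's a minimal OpenSCAD sketch of the idea. The variable names and values are my own invented illustration, not the speaker's actual remote-control model:

```openscad
// Parameters: change one number and the whole model updates.
button_spacing = 1.8;   // gap between buttons
button_size    = 9;     // width/depth of each button
remote_width   = 40;    // overall body width

// Remote body
cube([remote_width, 100, 8]);

// A 3x4 grid of buttons, positioned entirely from the parameters
for (row = [0:3], col = [0:2])
    translate([5 + col * (button_size + button_spacing),
               30 + row * (button_size + button_spacing),
               8])
        cube([button_size, button_size, 2]);
```

Changing `button_spacing` from 1.8 to 3, as in the live demo described below, re-lays-out every button at once; that is the payoff of parametric design.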
So you might think that's going to be kind of hard: if I can't move my mouse around and place something on the screen, why would I write code to do that? Well, I'll give you a good example. Let's say you're designing this remote control, right? You make a prototype, and the prototype doesn't work very well because the buttons are too close together. In a traditional CAD program, you're going to click each button, drag things around, adjust the spacing, and you may even have things like a virtual measuring stick or measuring tape that you use to check each point. That's the way, for example, SketchUp works. It takes a long time to make changes like that. It's a trivial change, you just want different spacing between your buttons, but it can be very complicated to make. However, if you're changing a button in a parametric design, you just change a variable, like you would in a piece of software. So actually, I'm going to show you that. Let me just switch over for a second. It'll be tricky. Okay, so we have this. This is the design we were looking at before. I want to go in and change the button spacing. It's currently 1.8. Let's change it to three, and you can see everything just changed, right? I just changed the parameter. I want a wider remote: it's 40 units wide, I could make it 45, and we can see we have a slightly wider remote. I could change the height of the buttons: they're nine right now, let's make them 11. For me, that's too big. Yeah, it's pretty big, right? So you're just making these changes, and effectively I've written software that defines the shape of this model. That's, in essence, parametric design. Now, these pieces over here just reflect variables that I've exposed in this panel, and I'll show you how to do that. Let me switch back to the presentation. It's not a panacea, though. Let's say you have this organic shape, right? Let's say you want to model this.
If you try to do this in code, it's going to be a pain, right? Because this organic shape doesn't naturally have order to it. You're probably going to plot hundreds of points, compile that, and then do it over and over and over again. It's going to take a long time. So in a case like this, a more point-and-click system, like FreeCAD, is probably a lot more efficient for these kinds of organic shapes. Parametric design is not a panacea for every problem you face, but for a great number of them, it's super useful. So let's go into OpenSCAD. OpenSCAD is the tool that I really like to use. It's a domain-specific programming language for specifying geometry. The basic concepts are like a lot of programming languages. You have variables that have types; this is something you've probably seen in any coding you've done. The types are things like numbers, which are all floating point; booleans, true and false; strings, which are just sequences of characters; ranges, which are a little unusual, going from one point to another; vectors, which are a series of other values, mostly numbers, that can hold any of the types I have on screen; and undef, to represent an undefined variable. So this seems straightforward; most programming languages look something like this. OpenSCAD is actually very similar to a functional programming language. Not a purely functional language, but it has a lot of that flavor. So you have functions, obviously; they take in parameters and return values. They don't actually draw anything, though, which is a little unusual. You use them mostly to calculate things. And then you have modules. When I think of a module, I think of something very different, so this is important.
Modules draw geometry, and they can have children, but they cannot return anything. Functions in most programming languages return something; modules are separate here, because they don't return anything, but they have this concept of children. Children cannot modify their parents, but parents can modify their children, so think of it as that kind of relationship. Okay, I'm throwing a lot at you right now. How does this look, though? Here's an example of an OpenSCAD script. It probably looks pretty familiar to most people: you have a lot of common things, equal signs, lines ending in semicolons, parentheses doing different things. Let's walk through each piece. First up, you have a number. This is very straightforward: a floating point number. Then we have a vector. Vectors are denoted by square brackets. This one is actually the dimensions of a credit card: the X, the Y, and the Z, where Z is height. And one important thing to know about OpenSCAD is that it's unitless. Your units can be whatever you want them to be; you just treat them as arbitrary units in the software. I'm using millimeters here, metric system and all that. It's assigned to a variable, the credit card dimensions. So there are a lot of pieces going on here. Then we have a function. This function computes a diagonal position. It takes in two parameters, n and then dims, the dimensions, and it returns a vector. The equals is on the next line, but that's just for stylistic reasons, not for any programmatic reason. The vector is dims X times n, dims Y times n, and then zero. This X and Y is actually kind of interesting: the zeroth, first, and second values in a vector can be referred to as x, y, and z in OpenSCAD, just as shorthand, because a lot of the time you're manipulating something in three-dimensional space. And then we have a module here.
The module is named card, and it calls a built-in module, cube, as a child, which draws some geometry. You can see I'm passing the dimensions into the built-in module. This hasn't been called yet; this just defines the module. And then on the next line, we're actually calling that module: it says card, parentheses. And then we have this translate. Translate is another module, and it changes the position in three-dimensional space. You can see I'm passing in, again, that diagonal position function, with a one and a two, calling it three different times with different values. So what does this result in? It results in three credit cards in a diagonal line, right? I'm calling the credit card module three times and changing the position. That works pretty easily; it's pretty straightforward. However, it's not very good code, because I'm repeating myself a lot. Let's clean this up a little using some iteration. I'm using a for statement, just like you would in most programming languages. Here we have i, one colon one colon ten. That says go from one to ten, and the one in the middle is the step, so we're stepping by one. I could say two, three, four, five, whatever I wanted to. And then I'm going to repeat the card over and over again. It's going to be very similar, because I'm using the for statement and inserting the i right here. We're going to have a bunch more cards going across the screen. Everybody with me so far? Okay. So now we've made our first OpenSCAD script. Let's take a look at how a couple of different things work. You have the ability to work in both two and three dimensions. In two dimensions, you have a number of built-in modules, and I'll talk about the three-dimensional equivalents of those.
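The credit-card example walked through in the last couple of slides might be reconstructed as a complete script like this; the exact names and values on the slide may differ, so treat the numbers (standard credit-card dimensions in millimeters) as assumptions:

```openscad
// Credit card dimensions in mm (OpenSCAD itself is unitless)
card_dims = [85.6, 54, 0.76];

// Function: returns a position along a diagonal, scaled by n
function diag(n, dims = card_dims) =
    [dims.x * n, dims.y * n, 0];

// Module: draws one card as a cube with the card dimensions
module card() {
    cube(card_dims);
}

// Draw ten cards stepping along the diagonal; [1:1:10] means
// from 1 to 10 with a step of 1 (the middle value is the step)
for (i = [1:1:10])
    translate(diag(i))
        card();
```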
A circle, in this case, is a two-dimensional equivalent: it has no Z, no height to it. You will never output a two-dimensional object; that's not what OpenSCAD is for. You output three-dimensional objects. But you can convert between different representations. For example, if you have a circle and do a rotational extrusion of it, imagine rotating that circle around an axis, you get a sphere, right? That's pretty straightforward. Or say you have a circle and you extrude it, like that Play-Doh extrusion thing you had as a kid, where you'd push the lever down and spread out spaghetti. Extrusion. If you extrude a circle, you get a cylinder. I will say as well that OpenSCAD uses pretty generic names for its shapes. A circle can also be an ellipse or an elliptical shape. We have a square, which can also be a rectangle. They really break that rule that all squares are rectangles; they go in the opposite direction and call every rectangle a square. That drives me crazy a little bit, but that's okay. So you have a square, and the three-dimensional equivalent of that is a cube, which can likewise be a rectangular prism. A polygon is a series of two-dimensional points, or vertices, that are all connected together to create a polygon. The three-dimensional equivalent is a polyhedron, where the series of points you provide are in three-dimensional space. And remember when I talked about that organic shape and said it's not really a good idea to use OpenSCAD for it? There are options. For example, you can import external geometry. One thing you can do is create an SVG, which is a vector graphics format.
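As a sketch of that import workflow (the filename here is hypothetical, and SVG import requires a reasonably recent OpenSCAD release):

```openscad
// Import a 2D organic outline drawn elsewhere (e.g. in Inkscape)
// and give it 3 units of thickness via a linear extrusion.
linear_extrude(height = 3)
    import("organic_outline.svg");
```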
You can import that into OpenSCAD, and then you have that organic shape already, and you can start manipulating it from there in three-dimensional space. So the thing is, you can take all these two-dimensional shapes and convert them into three-dimensional shapes using extrusion. I talked about linear extrusion: you can take not just these basic shapes, but combinations of squares, rectangles, and polygons, and linearly extrude them, which is basically drawing them upward. Or you can do a rotational extrusion: like that circle rotating around a given axis. If you rotationally extrude a circle, you create a sphere, but you can take many complex shapes and rotationally extrude them too. The classic example is a wine glass or goblet shape: you do a rotational extrusion of the profile around one axis, and that profile is turned into a three-dimensional shape. Kind of like wood turning, if you know anything about that. Just as an example, because I think it's a really hard concept to get across: here's a rectangular prism on the screen. This is what it looks like written as 2D plus a linear extrusion. I have my dimensions defined only in X and Y, and then I have a height variable. I create a square, which is the child, and the parent is a linear extrusion by the height. So I have that square and I've drawn it up seven units. Pretty straightforward. Alternatively, you can go directly to three dimensions: give the dimensions in X, Y, and Z, do a cube on that, and you'll have the exact same geometry at the end. Another great thing you can do with OpenSCAD is geometric Booleans. Here I have a small script. This dollar-sign FN is a special variable in OpenSCAD.
It's basically the number of facets in a given piece of geometry. The default here is pretty low; I'm setting it to 30, which just makes things look a little smoother. It's not super important, but you'll see it a lot. So I have dimensions here. Okay, I don't know what happened there. So I have my cube and my sphere, and I'm going to intersect them. This is just like intersecting data sets in a database, for example, but with geometry instead. The sphere is slightly larger than the cube here, so you get the region where both of those geometries overlap: this rounded-over, dice-like shape. You can do difference, which is the exact opposite of that: if you were to put the two results together, you'd have just a cube, right? And then you can do a union, where both shapes are put together. Now, unions are implicit in OpenSCAD, but there are cases where you'd want an explicit union like this, especially if you're controlling color or doing some special slicing, that sort of thing. So there are cases, but unions are often superfluous; you don't need them to create that shape. If I just called my cube and my sphere here without the union, it would produce the exact same geometry. Okay, so there are other things you can do besides that. There are built-in modules to translate, and we talked a little bit about that. Hey, you've got a question? Yes, they are modules. So, you cannot pass modules as parameters, because they don't return anything; that's why you have children. The curly braces here are required when you have two children, as we do, but optional if you have one. Yeah, you can't pass modules around. I wish you could pass modules around.
That was one of my biggest hurdles to get over: realizing that there are children involved instead. So if you look at my cube here, it's really just shorthand: I could easily put cube right there with the dimensions in it. You're passing numbers into a module; you're not passing in another module. That's an important point, thank you for bringing it up. Yes, it's a little bit of a mental jump, I get that. So, we already talked about translating objects, moving something around in three-dimensional space: you can translate something in the X, Y, and Z dimensions. There are other modules that move and alter things in space. You can rotate something. Instead of a rotational extrusion, this just rotates the object in space. You say rotate, and you pass in a vector of three values, or two if you're working in two dimensions. If you want to rotate around the Z axis, you'd say zero, zero, and then the angle you want to rotate around the Z axis. You can scale. Scale takes your geometry and multiplies its size by a certain value. If you wanted to make something twice the size, you'd say scale two, and with my cube, for example, that would make a cube twice the size in all dimensions. That would actually be two, two, two, because you pass in a vector. You can resize something: you pass in a new set of geometric constraints that you want for it. Say you have a sphere that's two units in diameter, and you resize it to two in the X, four in the Y, and two in the Z; you'd get an ellipsoid shape. It's like stretching it out, resizing it to your given constraints. You can mirror, which flips it across an axis.
So if you have something that's translated off to the side and you mirror it, it goes to the opposite side of that axis, from positive to negative, and the shape faces the other direction. And then multmatrix transforms. Multmatrix transforms allow you to use a matrix of numbers to do multiple transformations all at one time. So you can do scaling and translating and resizing and all these things by just specifying a matrix of numbers. Now, I don't tend to use this because it's very hard to read and understand what you're doing. I'm doing this for myself, and there are no real efficiency gains for doing it this way. So it's there; if you're familiar with matrix transforms, it works. And of course, you can set colors. Colors don't actually translate into the exported geometry, but they are there for debugging. You can change shapes. So for example, here I'm doing an offset. This makes things bigger by increasing the outer width. So I have done an offset of 0.5 units, which takes the first triangle and makes it 0.5 units wider in all dimensions. You can hull, where you have two different objects, both children of the hull, and it geometrically wraps those two objects so they create one solid. And this is the one coming up that is really hard to understand: a Minkowski transformation. So if you look at this red object at the top, you have a rectangle and you have a sphere. Imagine rolling that sphere over all the different sides of it, and you get that yellow shape below. It doesn't always work the way you expect. Sometimes I do a Minkowski and I'm like, I didn't think it would do that, you know? But it is a way to create, in a very small amount of code, some shapes that are hard to create any other way. So that's the basics of the entire language.
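Here's a rough sketch of those three shape-changing operations. The shapes and sizes are mine, chosen just to show the effect of each:

```openscad
// offset() grows (or shrinks) a 2D outline by a fixed distance;
// extruded here so everything in the file stays 3D.
linear_extrude(2)
    offset(0.5)
        polygon([[0, 0], [10, 0], [5, 8]]);

// hull() wraps a convex skin around its children, fusing two
// separate objects into one solid.
translate([20, 0, 0])
    hull() {
        sphere(d = 4);
        translate([10, 0, 0]) sphere(d = 4);
    }

// minkowski() "rolls" the second shape over the first, rounding
// every edge and corner of the cube with the sphere's radius.
translate([50, 0, 0])
    minkowski() {
        cube([10, 10, 5]);
        sphere(d = 2, $fn = 24);
    }
```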
You can take these pieces and put them together to create almost any shape. So how am I actually writing code in it? There is a built-in IDE. It all works in a single window. You can also compile from the command line. I have a very bold hate for that built-in view; I don't use it at all. What I use instead is VS Code. There's a plugin for it, so you can write in VS Code in one window, have your preview screen on the next monitor, and every time you save in VS Code it'll update on the other monitor. You get immediate feedback and you get your familiar coding environment. So, that's OpenSCAD. I've spent hours and hours and hours doing so much with this and it's been a very rewarding experience. But what most people wanna do with this is actually take these models and move them into a physical thing. So you've spent hours designing this model. You've looked at it. You know what it's gonna look like. How do you get it from your screen to the real world? Who here knows about 3D printing? A bunch of people, good, good, good. So we're gonna talk about the pipeline, for those of you who don't know, and we'll talk about some of the open source implications of some of these things. Generally you're gonna take OpenSCAD, compile and render this out, and then you're gonna create an STL. The STL will be sliced and turned into G-code. The G-code then will be sent to your printer, and then you get your physical thing at the end. Right? Now along the way, one of the things interesting for me is that I use a fully open source process for this. Everything I use in 3D printing, and actually even the 3D printer I use, is all open source. So I use OpenSCAD, which is open source. I use Cura, which is open source, as my slicer. I use Marlin firmware on my 3D printer. So these are all open source pieces. So, slicing: you take an STL. STL, I looked at the history on that, stands for stereolithography.
It's a very old format from, I think, the early 90s. Basically, that three-dimensional shape is converted into a number of layers by the slicer. So think about taking whatever shape you have, say a teddy bear, in three-dimensional space and slicing it into a number of slices, kind of like a CT scan. Have you ever seen a CT scan before, where they scan your body and you see your brain and eyes and that sort of thing? It's kind of like the inverse of that. Each layer is then turned into lines. Those lines are specific to your printer, and the resulting G-code file format comes from the CNC world. It's a standard, RS-274. For this I use Cura, which is LGPL v3. You can find the source code here. It's by Ultimaker, which is a 3D printer manufacturer. There's also the PrusaSlicer family. There was a slicer called Slic3r, slicer with a three in the middle, so you can see that there. It was forked into PrusaSlicer, which is by another manufacturer of 3D printers, and it was subsequently forked into SuperSlicer. It's GPL 3.0; you get the source code here. So once you've got the G-code from the slicer, you're gonna wanna get it onto your printer. The basic way is to take it from your slicer, save it to a micro SD card, get up out of your chair, walk over to your 3D printer, go to the LCD on the 3D printer, and initiate the print. That works fine; most people do that. But I would say one thing that's better is OctoPrint, which is a Python-based web interface for 3D printing. It runs on something like a Raspberry Pi that's connected to your 3D printer; it could be other single-board computers or a laptop or something. It's GPL 3.0. And it's connected to your printer over USB, so it is basically commanding the printer to do different things. It's sending G-code directly to the printer instead of using an SD card. And it has a lot of great plugins.
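To give a feel for what a slicer actually emits, here's a toy sketch in Python, not a real slicer, that produces the G-code moves for one square perimeter on one layer. The function name, dimensions, and the fake extrusion amount are all my own illustrative choices; only the G0/G1 move syntax reflects real G-code:

```python
def square_layer_gcode(size=20.0, z=0.2, feed=1200):
    """Return G-code lines tracing a size x size square at height z."""
    corners = [(0, 0), (size, 0), (size, size), (0, size), (0, 0)]
    lines = [f"G1 Z{z:.2f} F{feed}"]          # move up to the layer height
    x0, y0 = corners[0]
    lines.append(f"G0 X{x0:.2f} Y{y0:.2f}")   # travel move (no extrusion)
    e = 0.0
    for x, y in corners[1:]:
        e += 0.033 * size                      # fake cumulative extrusion
        lines.append(f"G1 X{x:.2f} Y{y:.2f} E{e:.3f}")  # print one edge
    return lines

if __name__ == "__main__":
    for line in square_layer_gcode():
        print(line)
```

A real slicer does this for every layer, plus infill, temperatures, retraction, and so on, but the output is just a long list of moves like these.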
What it allows you to do is monitor your prints. You can hook a webcam up to that Raspberry Pi and see how a print is coming along, which saves you a ton of time running up and down stairs and that sort of thing. You can monitor the bed level if you have a level sensor on your 3D printer; you can then see exactly where you're out of level. There are even AI techniques you can use to see if your 3D print has failed, called The Spaghetti Detective. I don't use that. My prints don't fail. I'm kidding, half of mine fail. So every printer has a main board. Main boards are eight- or 32-bit microcontrollers that actually descend from Arduino; the early ones were Arduinos with stepper drivers. Now we've moved on a little bit from there. They turn your G-code into movement for your accessories and your axes on the 3D printer. So the firmwares are not super complicated, but they are important. For example, Marlin is what I use on my Ender series printers. It's GPL; you can get the firmware here. RepRapFirmware: RepRap was the movement that kind of started 3D printing in the home, and they have their own firmware that people still actively use. It usually runs on Duet boards, and you get the firmware there. And then the newer arrival on the scene is Klipper. Klipper is kind of an advanced firmware: the firmware that goes on the 3D printer is kind of dumb, and the firmware that runs on a Raspberry Pi is very smart. It's a different model; you wouldn't need OctoPrint in that case. So that's all that I had. I want to give a good amount of time for Q&A on this. Who did I confuse? What questions do people have? Go ahead. Yeah. The customizer? Yeah. So, let's do that. Bear with me. I've set up multiple monitors here. Okay. Let me actually, so I'm not craning my neck, see if I can get this all to mirror. Just bear with me for a second. Just a second. Struggling with monitors. Hey, here we go.
Okay, so I'm gonna bring up the editor. As you can see, here are the variables that are represented over here. It does some transformations, so it will take an underscore and convert it to a space. So I have this on each screen, and if I was to change this and recompile it, let's say, reset this, we'll see those values come back. So if I change this to, say, 46 and then preview, we can see the customizer has changed over here. You use comments on it to do further things. You can actually mask things out. I don't do this very often because it's mostly my own work, but if you didn't wanna show the fn variable, there's actually a comment, I think it's Hidden, that will hide it from being seen. If you use Thingiverse, you can actually export these into Thingiverse. I don't like Thingiverse personally. I have some objections to how they run the site, and it's had many exploits, so your data is not very safe if it's in Thingiverse, or it wasn't until recently. So you could put this in there; there's some documentation in the OpenSCAD documentation about how you can hide variables, and you can also control what the constraints are through ranges in the comments as well. Yeah. Did I answer your question? Cool. What other questions do people have? Yeah, in the back. What's my favorite filament? So I use something that you guys can't get. Probably not. I use filament.ca MAT PLA Plus. Sorry, you could get it, but the import fees would probably be very expensive. It's cheap for me. Other questions? In the back. So I didn't do the, no, I did the house before I was using OpenSCAD. I am designing a garage right now in OpenSCAD, though. Sure. So you could take their work and bring it in, but you probably couldn't share your work back very easily. You can import DXF, which is a common export format.
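The customizer reads specially formatted comments to build its UI. Here's a minimal sketch of that syntax; the variable names and values are hypothetical, just to show the tabs, ranges, dropdowns, and the Hidden section:

```openscad
/* [Basic Settings] */
// Width of the box in millimeters (slider from 10 to 100)
box_width = 46;          // [10:100]
// Lid style (dropdown of string options)
lid_style = "snap";      // [snap, screw, none]

/* [Hidden] */
// Anything after a [Hidden] tab never appears in the customizer.
$fn = 30;

cube([box_width, 20, 10]);
```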
So you could bring that in; AutoCAD and SketchUp will all export to that. I think it's actually natively an AutoCAD format. So you could do that. My problem with designing structures in OpenSCAD is I go crazy. I wrote a module that will place all the two-by-fours in the right positions, and I had to think about all these things, and I realized that's not necessary, right? I needed to kind of scale back and that sort of thing. But it's really nice, because I was able to figure out, what if I changed the size of the garage door, right? Could we have an external garage? And then I could say, okay, what does that do if I want to have a two-foot space between the garage doors, right? Like if I go to the next size up. So I've explored that. We're in the process right now with the builder on that one. But yeah, you can do it. I would say, though, if you went to an architect and handed them an OpenSCAD file, they would look at you with a blank stare. Question, yeah. So that's a complicated question. There are two different rendering engines in OpenSCAD, CGAL and OpenCSG, I think; I don't remember exactly what the acronyms stand for. But you're doing the finalized rendering for the STL, so it's not extremely fast when you're manipulating an STL file. But if you wanted to, say, take an STL file and cut a chunk out of it, it's an excellent tool for that, right? If you wanted to take an STL file and put holes in it, that's fine too. If you wanted to do any finer manipulation of it, it would be kind of a chore. But yeah, I did go from designing something. One of my most popular designs is a 3D-printable Raspberry Pi case that folds up. I designed the original one in Tinkercad, polished it, a thousand people retweeted it, it was crazy. And I redesigned it entirely in OpenSCAD, because I realized that it wasn't scalable to use Tinkercad for that.
It was like, if I wanted to move something around, there was a mounting stud that was slightly off, and that was like four hours of work in Tinkercad. Whereas when I redesigned this in OpenSCAD, it's: change your variable, done, right? It's really freeing. Yeah, you know, that's a good point. So for those who didn't hear, the question is, can you be creative with OpenSCAD? That's kind of what you're saying, right? I think you can. For me, one of the things I like to do, my kind of genre, my niche, is creating foldable 3D prints. So I create these 3D prints that start out printed flat and then you fold the geometry up into different pieces, right? I've kind of shifted my mind to where I start thinking that way first. And I created a library to help me do that. It would be impossible for me to know, if I designed something in a point-and-click CAD, whether it would actually fold up. So what I figured out is a way (and I'm gonna polish it, I haven't yet; it just takes a long time to document these things) to start realizing, this won't fold, right? Because I know the constraints; I've put the constraints into OpenSCAD. I know which ways it'll work, and it will tell me if I'm going down a bad path. So in that way, I've spent the time to do this, and now I can use it in a different way. It actually aids my creativity, because then I know that I'm not going down a false path. Yeah. One thing I found I can do with OpenSCAD that I couldn't do with any other CAD program: I put my line width and my layer height into my CAD software. And then, if I want something, I can put an assertion into it that won't let me go down a path that doesn't work, based on the line width. And I will tell you that the STLs being produced by OpenSCAD are way more accurate than Tinkercad's. To the point where, you know, Tinkercad's like, ah, yeah, I'll put this here. It's about this, yeah, yeah, yeah.
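That trick of baking printer constants into the model might look something like this sketch; the variable names and tolerances are mine, not the speaker's library:

```openscad
// Printer constants baked into the design (illustrative values).
line_width   = 0.4;   // nozzle line width in mm
layer_height = 0.2;   // layer height in mm

wall = 1.2;           // desired wall thickness

// Fail fast if the wall isn't a clean multiple of the line width;
// such walls tend to print badly (gaps or over-extrusion).
// Compared with a small tolerance to avoid floating-point surprises.
assert(abs(wall / line_width - round(wall / line_width)) < 1e-9,
       "wall should be a multiple of line_width");

cube([20, wall, 5 * layer_height]);
```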
So yeah, those are good points. You have to change how you think about things to really use it. Question up here. Yeah, that's a good point. So I have played around with the Python library, which the name is escaping me right now. There are a couple of different ones. There's a JavaScript one, which I think uses the same engine; I think it links into the C libraries from Node.js. For me, when I looked at this, I thought I was just gonna focus on OpenSCAD itself, because there's a larger community here, right? There is a very active Reddit community for it. So if you find creativity in writing your code in Python and using these libraries to output it, that's great. I wouldn't fault anybody for that. But for me, I just wanted to go where the community was. But yeah, good point. Other questions? In the back. Oh, that's a good question. I don't know if there are any textbooks on it. There are lots of parametric design tutorials I've seen on YouTube and places like that. And I will say, I don't want people to think that you can't do parametric design in something like Fusion 360; you can. It's a totally different model, though. In some of the point-and-click CAD systems, what you're gonna do is specify things in a dialogue box. So the tutorials may be talking about parametric design from the perspective of Fusion 360. From a video perspective, I work as a developer advocate at Amazon, and I know that writing code is not very good video content. Staring at someone else writing code is terrible. So some of the Fusion 360 ones have a lot of views, and they will get you started on it. But I haven't seen one; I'm sure somebody who went to, maybe, mechanical engineering school could talk a little bit more about parametric design. I didn't. So it's not anything that I know of. But the OpenSCAD documentation is excellent. It is really, really good. So I would start there if you're interested in it. Question in front.
Yeah, so that's something people ask a lot. Hulling is your friend. So minkowski and hull are what you wanna use to do a fillet. A fillet, for those who don't know, is where you're basically joining and creating a gradual transition between two geometric shapes. Say you don't want an exact square corner on something. There are a couple of ways you can do it. Minkowski, where you have that ball that you're basically rolling around: you would slice out (not with a slicer, just visually slice out) the area that you wanna round, minkowski that area, and then bring it back in, join it in. And then hulling. Hulling's better for chamfering, right? So you take something that's slightly smaller, use that offset module that we talked about to reduce its size, then have the original size of it, and then you hull the two things together and it joins them in a way that makes sense. So those are your two friends for that. Yeah, did that answer your question? Yeah. The one thing that I will say: when you start talking about a lot of these things, you start getting deeply involved in the math, the trigonometry, the geometry of it all. Some of these things you can actually create by just writing a function that will generate the shape you want: figure out the math, and then just have it plot points as polygons or polyhedrons. Yeah, okay. Yeah, yeah, there are quite a few libraries on GitHub. One thing OpenSCAD is sorely lacking is a package manager. I have considered writing a package manager, and then I realized I prefer my spare time and sanity. So it is sorely lacking that, so libraries can be hard to find. There are some really good libraries. I will say many of the libraries for OpenSCAD are very opinionated, and they don't mix well with others, right? So you kind of have to go down a path with one. What is it? I think it's a boot or something like that.
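Here's a rough sketch of two common ways to soften a corner along those lines. The radii and dimensions are my own illustrative values:

```openscad
// 1. Fillet a 2D outline: shrink with offset(delta = -r), then grow
//    back with a rounded offset(r = r); convex corners come out round.
r = 2;
linear_extrude(2)
    offset(r = r) offset(delta = -r)
        square([20, 12]);

// 2. Chamfer in 3D by hulling a full-size base with a smaller,
//    raised copy of the same outline (the offset trick from earlier).
translate([30, 0, 0])
    hull() {
        linear_extrude(0.01) square([20, 12]);
        translate([0, 0, 4])
            linear_extrude(0.01) offset(delta = -4) square([20, 12]);
    }
```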
It's the really common one that addresses one of the common problems you have, which is, I want things to be positioned in reference to another thing. I have a cube over here, and I want another cube to the left of that cube, and then a sphere over here. Well, that's actually kind of hard in OpenSCAD. Translating things gets messy really quickly. But there is a library that will allow you to do that. However, that library, which I found on GitHub, is incompatible with everything else. Once you have an existing model that you want to add it to, forget about it. You have to redraw everything. You have to redo everything. So it might be worth the time for your specific project to invest in it; figure those things out. There are also libraries that will give you modules for things you would use in mechanical designs, like bearings and different pieces, and you can just import those in. I don't think those are as well documented as the core of OpenSCAD. That's a great point. Okay. I didn't know about that. Yeah, that's great, right? Exactly, that's amazing. And trying to do that in any other CAD program is madness, right? You would never want to do that. So, great point. Other questions? The one thing I will tell you is they restrict your files. Yeah, and I will say that, for me, my wife is a professor of Spanish literature, so I'm loosely involved with academia. So I have taken advantage of some of those shortcuts. That's how I get SketchUp for a little bit less money. But I'm an open source advocate. I kind of feel like, why should we be restricted by these really draconian licenses that are really designed for professionals, where you're saving the money, right? For people with hobbies, that doesn't make sense. So I'm glad they do have something for free, but I think their marketing is a little bit about trying to hook people in.
Once you invest the time to learn it, you're gonna be a subscriber for life. And there's something about that that bothers me. Yeah, go ahead. No, I really don't know. No, not yet. I'd like to know, yeah. In some ways, it reminds me of responsive web design, where you define something, and you make your window less wide, and magically it rearranges things. It reminds me of that, which I learned, you know, 10 years ago or whatever. So I think this really does help me understand that world a little bit better, even. Yeah, in the back, with Blender. Yeah, yeah, yeah. So glad you brought that up. Blender is really great for digital sculpting, right? So let's say you wanna create a bust of Daredevil or something: Blender is the best tool for that, and it's open source. Recently there was a plug-in that came out that allows you to do parametric shapes in Blender. Now, I haven't played with that. Blender is its own ecosystem to really learn and play with, but I haven't played with that. I've been meaning to look at it. I'm not a digital sculptor, that's not my creativity, but it is good to be reminded of that. So yeah, I don't think they're overlapping, and in fact, if you look at the OpenSCAD website, it says something like: if you're trying to create 3D models for an action movie, this is probably not the right tool for you, right? So I would say if that's what you wanna do, use Blender all day long. The parametric plug-in that came out six months ago is interesting, because Blender is very powerful. We've got about six minutes left. Any other questions? I'm glad to stick around. Yeah, yeah, sorry. Printables. So that's my thank-you slide. You should take a look at things with me on Twitter and GitHub, but Printables is actually run by Prusa.
I'm not actually a fan of Prusa, I mean, I don't have their printers, but they are building a really good community for 3D models on Printables. You can find my profile there. I wish that they would have more integrations with OpenSCAD, so you could just customize the model right from there, but they don't have that yet. I've been on it for a couple months now, and every time I turn around there are new features, and frankly I trust them a lot more than I do Thingiverse, because Thingiverse disclosed my information in a breach and they just didn't fix it. Didn't fix it, as far as I have understood. So am I just signing up? The information's still out there. I don't like that. So yeah. An Ender 3 V2, and I've just replaced that with the Ender 5. Yeah. That's right. Yeah. Slicers are very personal. For the work that I do, I think there are big missing pieces in PrusaSlicer. I create these foldable prints, and one thing I need very close control over is the direction of the lines on the initial layer. Cura has controls for that; PrusaSlicer doesn't. So I play with it, but yeah, I use an Ender 5 Plus, which is a big one, but I used the Ender 3 up until just a few weeks ago. Yeah. Yeah, for sure. So that's one of the problems, I think: printers all come bundled with software, and it's just a re-skin of another piece of software. Like, Creality Slicer is just Cura, just an old version of it. But yeah, PrusaSlicer, people love it. Interestingly enough, the latest version of PrusaSlicer uses the same slicing engine as Cura, the Arachne engine. So we're seeing them choose the best parts from each one, which is great. At that point it just becomes the interface. Yeah. Okay. Well, hey, thanks so much. This has been my pleasure talking about this. Thank you. Come see me at the AWS booth if you wanna talk about Linux, or if you wanna enter to win an Ender 5. No, Pro. Pro.
No, you just get a little ticket to enter. We'll be drawing on Sunday. Thanks, folks. Greetings. Test? Okay. Get this live, yeah. Hello. So my name is Brian Proffitt and I am with the open source program office at Red Hat. And I'm gonna be talking to you today about how corporate engagement seems to be going with open source communities. This is generally based on a 2021 study that was performed on our behalf by the University of Nebraska Omaha and the University of Missouri. Red Hat was very much not involved. We did not set any parameters for the study. We did not weight any questions. I don't even know who the exact people interviewed for the study were; that information has been kept completely private, even from me. We basically just said: we kinda wanna find out this stuff, here's some money, go. But we'll get into more detail about that in a second. So what we're gonna talk about today is the actual method of research used in the study. Then we're gonna talk about some things that they found out that were not so new, for reasons I'll explain in a little bit. We're gonna find out the things they found that were brand new and kind of surprising. And then we're gonna go through some of the things that the companies were talking about, in terms of barriers to contribution, how they want to build community and why they want to build community, and then how diversity played into it, and also how business intersection played into it. And then we'll wrap it up with implications of the whole study and where it's going to go next. So first, the background. As I said, this was conducted by the University of Missouri and the University of Nebraska Omaha. And the University of Nebraska had already done a study very similar to this in 2011. At that point it was done on what was really a new phenomenon then, where businesses were really starting to engage more and more with open source communities.
Now, obviously they'd been doing that since like the early 2000s, but it was really fledgling and it came in fits and spurts, and there was a lot of FUD going around. But by the beginning of the 2010s, it had gotten to a point where you could do a fairly good survey. And to give you an idea of where they were, this original investigation was sponsored by the National Science Foundation, again conducted by the University of Nebraska Omaha. To give you an idea of the space there: the Linux Foundation at that point had a grand total of eight projects, instead of the 50 billion that they have now. Slight exaggeration, but close. And the lay of the land was a little bit different. Open source had kind of come in, the FUD was dying down, but businesses were still kind of pulling back a little bit. The good news is, because that study was there, we were able to do comparisons between that study and the questions that we asked in 2021, 10 years later. The research methodology used an interview protocol that basically just asked a lot of open-ended questions. There were 35 interviews conducted with open source participants. As I said, Red Hat didn't know who they were; we did not pick them. I will say, with one caveat, we did say: don't go preach to the choir. Don't just go to known companies that are already working well with open source. We wanted a diverse mix of organizations, in what they did, where they were, how they felt about open source: really positive, maybe not so much, maybe really not so much. We wanted a good mix. And although I don't know the identities of any of the organizations that were interviewed, I trust that that mix was achieved, based on the questions. It wasn't just one kind of person in each organization that was interviewed.
They were interviewing developers, office staff, community managers, project managers, product managers, CEOs, anyone in the chain. Sometimes multiple people from the same organization, so they could compare and contrast the answers. They did a lot of qualitative content analysis. As somebody said to me earlier, God bless the grad students who did a lot of the legwork here. And then they used the pair coding method to extract themes that were coming out of the answers, and that's where a lot of the findings were pulled from. Okay, so what did they find in 2021 that matched up with what they found in 2011? What things haven't really changed, or if they have changed, have gotten even more ingrained into the corporate relationship with open source communities? Okay, so one thing that they saw 10 years ago, I'm sorry, 11 years ago, was software use around open source. Again, and this is no surprise, it was being consumed to solve a business problem. And we've seen that over and over and over again in the open source community. The first contact between organizations and open source is generally on the consumption side. They're gonna use some software that does what they need it to do. It may be free as in beer, or low cost, or well supported, whatever they need, but they're using it. It solves their business need. That didn't change. A lot of times, because of the low cost barrier, they were getting free infrastructure. Obviously, the Apache web server is a classic example of where you can get software that does exactly what you need and you really don't have to pay for it. So again, nothing new, nothing really surprising there. Leveraged development, now, this was early. In 2011, this was not something that was happening a lot.
It did come up as a notable trend in 2011, where people were using collaborative labor from communities to support whatever the business was trying to do, and also to build new alliances with other companies. So you saw that around OpenStack, once Rackspace sort of gave it back out, and things like that. So that was coming to the forefront in 2011. In 2021, the results were much stronger there. It's basically just a continuation: it was early days for this kind of behavior in 2011, but in 2021, it's pretty much like, oh, the sky is blue. So it wasn't really a big surprise there. Again, an early behavior was around the monetization of open source. People were trying to figure out: how do we make money with this magical open source software? Companies like Red Hat were, obviously, from the get-go, doing support services around open source software. Other companies were trying to jump in and figure out how to do that. Or they would do the marketplace idea, where you've got open source software that might be highly complex, for example, so you build a marketplace of APIs around it as a possible way to generate revenue and collaboration. And then the usual thing that pretty much most open source projects do: they try to build the contributor side of their community by acquiring as many new users as possible. Then through attrition, you tend to get people who are interested, then very interested, and then suddenly they're like, oh, we should contribute. That's a typical pattern. Again, in 2011, this was kind of sort of starting to happen in fits and spurts. It came up on the survey, and it was certainly confirmed again 10 years later in 2021. Okay, so I'm sure that's probably the sleepy part of this conversation, because again, I haven't really said anything here that will blow your socks off. This was just confirming what we'd already seen. So now, what did we find when we did it in 2021? What was new?
What new things popped up? And this was the part where I started to think about some of these things in odd and mysterious ways. Okay, so the first thing that they discovered that was different in 2021 was the evolution of corporate culture, and how it had become much more friendly to open source communities and open source software development in general. Again, I am sort of falling into the linguistic trap of talking about open source in terms of software only. Obviously there are other forms of open source, but for the purposes of this study, we were pretty much talking about software development, so I'm gonna stick with that for now. The legal culture within corporations was much more attentive to risk. In 2011, there was a lot of FUD going around; lots of different organizations were naysaying licenses, and there were the old GPL versus permissive license conversations that used to happen. And lawyers for corporations did not know, 10 or 11 years ago, what was good or what was bad, because they just weren't familiar with it. Now they are. And that came out very clearly in our study in 2021. There were a lot of conversations around the blending of internal contributions and external contributions. The last time we ran this study, there was a big firewall: we're only gonna develop out here in the community, or we're only gonna develop here internally, and the two are never gonna meet. That is not true anymore; there's a lot more blending. There were some conversations in the study that we did last year around how hard it was to change corporate culture. So I don't wanna paint the picture that this has been an easy transition. There have been a lot of conversations about how you get an existing corporate culture to embrace the idea of open source collaboration, because it's a change. If you're moving from a hierarchical model to something that might be more flat and more participative and collaborative, that can be a serious change.
And that can be a change of the corporate culture. It can be a change of the national culture, depending on what country you're in. So it's not been an easy transition, and that came out in anecdote after anecdote in our 2021 study. The formation of open source program offices is also on the rise. That's kind of not news, but it definitely showed up. People are looking at open source program offices to help them: they either wanna build their own, or they wanna tap into the existing resources of other open source program offices. We're seeing that in all sectors. In Europe, we're definitely seeing a huge pivot by governments in the EU to move to the open source program office model across government agencies. Okay, other things that we saw. These came around the idea of building communities and what corporations wanted to do, and what we saw was really surprising. A lot of the companies we talked to were really adamant that there had to be some kind of active community involved with a project before they would even get engaged with it. That was a huge change from 2011, when basically, as I said, everybody was consuming the open source and they would just take it and run it. They weren't really interested in engaging with the community at all, and if they were, they weren't really paying attention to what that community was and what it was doing. Because, as I'm sure all of you know, most open source communities are doing quite well. They're quite healthy, but there are a few out there that are not doing well for various reasons. And corporations now are paying more attention to that. They want sustainability. They don't want to engage with a community project where the community might be about to collapse for whatever reason. So now they're paying attention to what's going on. They're really looking at issues around diversity, equity, and inclusion, which for me was fantastic news.
Because if they see what someone would call toxic behavior in a community, they will not engage with it. They will either outright not engage, or they will take steps to try to change that toxic behavior within the community themselves. We saw this come up repeatedly in the 2021 study. They were willing to build a community. If a community did not exist and that software was absolutely what they had to use, there are more than a few examples of corporations that were basically like, okay, we're just gonna build a community. Either there are a couple of people in there and it's really nascent and we're just gonna add more resources, or there's no community at all and they were gonna try to build one from scratch. They really wanted to do that. And by the way, I should note this is not a community of just themselves. This isn't organization Y coming in and saying, okay, well, this is just all us and we're just this clubhouse. They wanna bring more organizations in. It was really telling that the ideas around corporate diversity, having more than one organization involved in a community, kept shining through in the 2021 research findings. And that highlights the last point: people just wanted active and engaged communities if they were gonna engage with open source. Another thing that we saw was around reputation. So, it's capitalism. Not everybody's doing this for purely altruistic reasons. There's still that idea where, okay, we're gonna get involved in open source and we're gonna tell everybody, and everybody's gonna see how cool we are. Okay, that's fine. That's the marketing side of it. You wanna build your goodwill, you wanna build your name recognition. The new thing, though, and this is not universal obviously, was that in the findings we had, we were seeing evidence that people wanted to actually do the work.
There's a term, rather derogatory, called open-washing, where an organization will just say, ooh, we've got open source, and they've thrown it out on GitHub or GitLab or whatever and haven't really done anything else, but they've said, hey, this is open source, it's right here. But you haven't built a community around it. We're not seeing that anymore. So yes, they are taking advantage of it, they are marketing themselves, but they're actually putting in the work behind it. They also wanna get involved in communities because they have this technical landscape, and now they see the open source ecosystem as a whole as another playing field. It's not just the marketplace where they're going to try to compete; now they're using the open source side of it with technical leadership, and they wanna build their competitive landscape there. One example that I'm very familiar with is Red Hat's involvement with the Kubernetes community. For those of you not familiar with Kubernetes, it was basically invented by Google, and they gave it out to the rest of the community, and a lot of other organizations are involved in the Kubernetes community. It's very, very engaged. Red Hat, because of our model where we wanna demonstrate to customers that we are very knowledgeable in Kubernetes, is also very actively engaged in that, and that is sort of how we compete. We compete through cooperation. We wanna be really engaged with it so we can go back to our customers and say, yeah, we know Kubernetes, because we're doing a lot. We're not gonna be dumb and say we're number one, because metrics can be gamed all the time. I can think of an example where a few years ago we said something about being the number one contributor in OpenStack, and then like two months later HP came along and did a ton of contributions, and so they were number one. So our marketing department kind of learned: maybe don't do that.
Just say things that are a little bit true but softer, like, yeah, we're really doing well in OpenStack. So those were new findings that came out in 2021. Excuse me. The other thing, and this is where I got really maniacally happy for reasons that probably aren't good, was where we saw the alignment of communities with business objectives. The first one is not so new: standardization around ecosystems. People get involved because they wanna make that particular software a general standard, if not in actual name then in purpose. They're also getting engaged with open source communities because they see something new being built and they wanna get their hands around it and try to get a jump on their competitors. I remember when Docker came out; I was actually at a conference in the Czech Republic and I'd never heard of Docker. What are we talking about? What's happening? And then somebody said containers, and I'm like, wait, like software jails? What's happening here? Because that was nothing new. But once it became clear that containers were the thing, it was almost like this game of what-if. Like, what if company A had figured out how to use containers for this a little bit sooner than Docker did, and used that to their advantage? A lot of companies are looking at that now. They're kind of looking for the new Docker, as an analogy. And also, this part's not as new, but it's definitely stronger than it was in 2011: companies and organizations are basically trying to participate so they can steer the direction of the project towards their existing business interests. And that's fair. If you have a roadmap, you would really like project A to sort of kind of match your roadmap, so you're gonna get involved in that. It doesn't always work. There are other organizations involved, there are other people involved, and like any other democracy, it's messy and can kind of roll along like a tangled ball down the road. But they are trying.
We're seeing that more and more. And then the really evil stuff. When the researchers presented this data to me, I got really excited. They were like, you need to calm down. Okay, but I was. So let me stipulate this right now: capitalism is not my favorite thing, let's put it that way. But I understand capitalism. So I was looking at this, and companies are getting involved in kind of negative or aggressive ways, where they're like, okay, we're gonna get involved in this community and we're gonna basically release our technical debt, and let other people, competitors who are participating in the same community, take over the maintenance of whatever technical debt or innovation is going on in that community. This is why I got excited. I thought, that is evil, but it's also kind of cool, because think about this: when open source first started, nobody wanted to touch it. I remember, in 2001, because I'm old, people were running around going, oh my God, there's open source in your server room, oh no, what do we do? And meanwhile there were like 50 Linux servers running in their server room that nobody at the CEO level knew about. That kept happening over and over. And now we're here today where it's like, oh no, we're just gonna use open source and do all these sneaky things with it. Weaponizing licensing: this one I'm actually not excited about, because I don't like licensing being misused like that, but it is happening. You can also influence a competitor. If you start jumping in and putting all your assets into a project, your competitors are gonna get really interested in that too. You're gonna maybe negatively affect them, and they're gonna try to get in there and see what's what. And that is another aggressive use of community around business goals.
Also, you're gonna maybe commoditize the competitor's service, so you can kind of make sure there's no vendor lock-in. You open source what your competitor's trying to do, or, if they've already open sourced it, you jump in and keep it spread out so that your competitor doesn't have the sharpest edge around it. You minimize their influence on their own project by jumping in and becoming part of it too. Again, this is all sneaky and clever stuff, but I like it because it demonstrates to me that people have enough confidence in open source now that they will actually try to do stuff like this. Am I excited about each one of these individual things? No. I'm a parent of three wonderful daughters, and the idea that they're all going to do what I want all the time and be perfect little angels? No, and I can't expect that of corporations either, okay? They're always gonna try stuff like this, but the fact that they're doing it with open source still gives me a little bit of excitement, because it's like, okay, we've gotten to this place, and we move forward. So, generally, the takeaways. Those were the new findings; what do we come away with from the whole thing? In the 2021 study, as I mentioned before, there has been either less separation or no separation between internal and external development. That is a definite trend line that came out in the study. And that means that people are engaging honestly with open source communities. They're not just taking their code and throwing it over the wall, which is the metaphor we use at Red Hat when we say, okay, well, you've made it source-available, but you haven't really given us a real community here. They're actually engaging with it, and they're doing it in the upstream, which is what Red Hat calls the community, with the metaphor of the river.
Also, as I mentioned before, organizations are definitely building open source program offices. At our open source program office at Red Hat, one of the things that we're doing more and more is educating our customers and partners, and even non-customers and non-partners. If y'all wanna come talk to us, I've got a whole library of materials that are Creative Commons. You can take them. We actually made a book out of them called The Open Source Way, and an online book. You wrote a chapter too? Yeah, all right, we got authors here, it's cool. Anyway, that's all Creative Commons. The idea was that we write it up in the upstream for The Open Source Way, and then any organization, Red Hat or Google or SUSE or Canonical, whoever, could take that and reuse it. It's CC BY 4.0, with attribution, so it's like: take it, use it, just mention who wrote it, and repurpose it for your own needs, because we wanna get the word out to everybody. Look, building an open source program office isn't the easiest thing in the world, but there are definite, concrete steps that you can take to move in that direction. So we're trying to push that as we go. Building community: this, I think, was the most positive thing that came out of this study. It's about people and relationships. It's no longer about consuming open source code; it's about participating in the community. We're seeing that over and over again. As I said before, if there isn't an active and engaged community, people are gonna make one. And now, the DEI part was even better, because companies are really looking at communities' behavior, and some of the old-guard nonsense that happens in communities is being examined with a far more critical eye, and that's great. People need to grow, people need to change, they need to evolve. Sometimes you have to poke them to get that to happen.
Well, corporate involvement and people coming in and asking a lot of pointed questions is a good stick to make that happen. So they're looking at the existing DEI issues in a community, and, this came up several times, if there is poor handling of DEI issues, they will actually consider that a reason to not participate in that project. Yeah, sorry: diversity, equity and inclusion. I apologize, I got a little jargony there. But yeah, the strategic advantages of open source, I've kind of already hit this. It's being used constructively: you can influence existing projects, you can drive technology the way you want it to go. That's the constructive side. The aggressive, sneaky side that I take perverse pleasure in, things like commoditizing services and sharing maintenance, all of that reflects the general overall confidence in open source communities. So with that, I have reached the end of this. Now, I wanted to tell you: I didn't want to put the link on the slide, but if you look at my Twitter handle in 14 minutes, a link will go up to an article on opensource.com that I co-wrote with the actual people who did this study, multiple authors, and it says it's related to this talk. I just timed it; I didn't know I'd be done this quickly. But yeah, I invite you to go read that article. There are five key findings that lay out pretty much what I said here, but in a much smarter way, because they wrote it, not me. So it's all good. With that in mind, I would be happy to take questions or comments. Yes, sir. So the question was, did the concept of inner source come up in this study? It didn't come up enough to raise a flag. It was mentioned. There were a few of the organizations who were doing that, and typically those were the ones in the survey who were very much not doing external contributions.
They were interested in the concepts of open source, but they weren't interested in participating externally. So I didn't mean to give you the impression that all 35 interviewed organizations were running out and doing external community work, but the trend had shifted very sharply in that direction. It's funny you brought that up, because as I was telling someone before the talk, Red Hat did have a plan to do a study around inner source in 2022, but the funding did not come through on our end, so we didn't actually push that through. But we have some ideas, because here's the thing. So, I'm speaking for myself here, not Red Hat. Inner source is kind of weird to me. I mean, I get it. I like the idea as a concept, because it's like, okay, it's training wheels for open source. But for me, being the reactionary that I am, I'm sort of like, why not just do it? Why not just go out to an external community and participate and get all the advantages of open source, instead of keeping it in house? Now again, I like inner source as a basic concept. Yeah, right, exactly, and I am highly biased, because I work for one of the biggest open source companies on the planet. So I concede that I've got my rose-colored glasses on there. So I get it. I like open source as a thing. Red Hat's position on inner source is not really public. We're trying to figure it out too, because we do have a lot of customers and partners that say, hey, can we practice with this? And as an open source company, we're sort of like, yeah, but just go that one little extra step. Like when you're teaching your kid to walk, and you're like, okay, I'm gonna let go now and see what happens. And you don't really want them to fall on their face. You're ready to grab them, but yeah, it's hard. But yeah, it didn't really come up enough, right? Yeah, and that jump, that jump is hard.
I mean, I alluded to that early in the talk when I said this wasn't all roses and sunshine. This corporate culture thing was hard, and that is an example of it. People are like, you know, yeah, this collaboration across teams and across departments is really cool, but I'm not so sure we want to share it with the outside world yet. For really interesting reasons. Some of it's IP-related. Some of it's, I don't feel like my code is good enough. I see you, Dwayne, I'll get to you in a sec. It always reminds me of hiring a housekeeper. If you've ever had someone come to help clean your house, and with three kids I've done that a few times, you always kind of halfway clean the house before they come. I feel it's like that with the code. People are like, I want to share this, but I really want to take some time and clean it up before I put it out for everyone to use. And so it's a hard mental jump. So, Dwayne, what's your question? Yeah, uh-huh. Well, I'll be dead, so okay. Have you seen this hairline? I don't know what's happening. So, yeah, okay. Well, we'll be here at SCALE 29, hopefully not here here, we'll see. So the question was: predictions for 2031. I'm gonna say this, and I have to admit, I just read a wonderful science fiction book called A Half-Built Garden by Ruthanna Emrys, I hope I'm pronouncing her name right. It was lovely, and Cory Doctorow was talking about it on a Twitter feed a couple days ago, and that's why I picked it up. It's a lovely book about a post-climate-change Earth where they're trying to recover. And they use what I thought was the most final form of open source collaboration that you could possibly have. It's set in like 2083 or something like that. They basically use what they call a dandelion network, and the dandelion network took inputs from everyone in a purely egalitarian way.
They were also using sensors for soil data, water data, wind, temperature, everything like that, and those sensors had egalitarian input into decisions as well. You couldn't make a decision without everybody looking at it and moving forward and trying to fix the Earth after it had been damaged by us. And again, it's science fiction. And the cool thing: aliens show up, and that was cool too. But anyway, I would like to see that. I would like to see the open source methodology conveyed throughout our existing systems of capitalism or post-capitalism or whatever we're gonna do, where decisions are made in a more collaborative manner across everything. I want it to leak out of open source software. Is that highly idealistic? Oh yeah. But if you ask me seriously what I wish it would be, I think that would be the way to go. Yeah, so the comment was about 3D printing and how there are lots of open source patterns and files around that, where you can print lots of things, including now, hopefully, organs. And I think, yeah, I think that's gonna be... And I did kind of state that preference, because one of the things about our team at Red Hat is we no longer use the word developer when we're talking about people who are participating in communities. We like to say contributor, because there are other ways to contribute. I don't code. I don't wanna tell you what I got in college for my Pascal class. No, no, that's a dark secret. We don't talk about that. So I don't code, but I contribute in other ways. I do things like this. I can do sysadmin work, I can do data analysis. I have skills, right? But I don't code, so I'm never gonna be a developer in a project. And back to your point, I think that open source hardware, open source printing, open source whatever: there are so many different forms of open source beyond software. That's where I hope it keeps growing. Will it really pervade throughout society? No, because people are silly. But I can hope. Ray.
Yeah. Ha ha ha. Okay, I'm going back over Philosophy 101. So the question was: what about using open source for evil? It's just a comment, and let's talk about that later, because I don't wanna invalidate what you're saying, I think it's very valid, but I also think that, unfortunately, anything can be misused. If I throw this phone hard enough, it will hurt you. Not that I want to, be cool. It falls into the category of anything can be used. But let's talk about that later, Ray. So, what's coming out in four minutes: the link will be to an article detailing, in a lot of great detail, the study that we did in 2021. The universities of Nebraska and Missouri are working on an academic paper around that, because, as I learned, academic papers do not get done very quickly. I was like, oh, we can't put this out there, and they were very kind; they didn't laugh right in my face, but apparently it takes a while. So it'll be coming out, and you can talk to them. The question was: can I give an example of the weaponization of licenses? Are we still recording? Okay, no, it's fine. I think the SSPL, am I saying that right? Yeah, that one is a good example of the weaponization of licenses. If people like the SSPL, I'm sorry, but it's not my favorite thing in the world. So, sorry, but I think that's an example. It's also a way to solve their business need, you know? So without taking a moral high ground on it, I think that is an example of it. How it plays out is anybody's guess. Any other comments or questions? You guys have been awesome. Thank you so much.

Thank you guys for coming to my presentation on Regolith. I'm gonna talk a little bit about myself, give a brief tour of the environment, and talk about comparisons with other desktop environments you may be more familiar with.
And I'd like to address the question of: why does something like Regolith need to exist in 2022? Can you guys hear me okay? So the title of the talk is Brand New, Yet Retro, and I came up with that because a lot of people who aren't in this world of desktop environments, when they see Regolith, just kind of assume that it's either something from the dark ages, like the 90s, or it's some kind of retro thing where we're trying to look old-fashioned, or that it's just really simplistic. And the reasoning for why Regolith is the way it is has nothing to do with those things. It's fine if you think that, but it's really a modern environment for people who care more about productivity. The last part of my presentation is about that: what do I say makes Regolith more productive than something else? So, for me, I write software for a living, so I spend a lot of time on the computer, and there's me doing the work, and then there's me above me, watching me doing the work. And that's kind of where some of these observations come from: observing myself doing things and trying to figure out ways of doing the same thing faster, so I can get out of the office sooner. I had a Texas Instruments TI-99/4A when I was a kid. I learned how to write BASIC programs with it, and sometimes I loaded up GEOS on a five-and-a-quarter-inch floppy, but I didn't really have a whole lot of use for it, because it didn't really do a whole lot on that computer. In my professional software career, most of my time in the 2000s was spent on a Macintosh. Apple computers work really well, they're good, they have a Unix system in them. But then around 2016, 2017, Apple kind of went in a different direction with their hardware that I didn't like personally.
And so I moved back to Linux from earlier years, and at that point in time I searched around looking for something that I felt would be a good environment for the kind of work I do, and I just couldn't find anything prepackaged. So I went down the rabbit hole of finding all these open source components like Rofi and Polybar and i3, and I just built things and played around, and it really opened up my mind to the possibilities of how a person can create or tune an environment to make themselves more productive, or whatever they want. Maybe productivity is the thing they care about, maybe it's something else, but having control over it, I think, is really amazing. And what I found was that a lot of people wanted to build things or create their own environments, but maybe they didn't have the time to learn how the C compiler works, or they didn't have the skill set to debug linker and library-versioning problems, or all the minutiae of details that can pop up when you're building programs from source code. And so I felt that I had the opportunity to give back to the community by providing that packaging work and bringing things up to a level where you don't have to think too hard to get something that will just work out of the box. And that's how I got started packaging things for Regolith, and it kind of snowballed from there. There's one other part: there are some people who think that if you know how to do things with the C compiler, you're somehow better than everybody else, and that whole vibe I just really disliked, so I intentionally wanted to create the opposite of that atmosphere. Before I get into the talk, I wanna cover a couple of concepts or terms to make sure that I'm not misunderstood. I talk about advertisements in here, and I'm not saying an advertisement is good or bad, but to me an advertisement is a piece of information that exists in your perception. It's not information that you've put there.
It's information that someone else put there. And there may be great reasons that the other person put that information in your perception, but it's not there by your choice. Another concept here is composition. In this talk, when I say composition, I mean the act of creating things by taking smaller whole pieces and building bigger whole pieces from them, rather than creating something as a single whole piece. You could think of it as the difference between building a truck with Legos and building a truck with a 3D printer. In one case, you can decompose it and replace the door. In the other case, you have to throw the truck away and print a new one. I'm gonna talk mostly about conservation of complexity, or Tesler's law, in the second part of the talk, but it's something that heavily comes into play. We're right now in meatspace, all interacting, and there are other places besides meatspace, but the desktop environment is one software program that depends heavily on meatspace, so we'll kind of talk about that.
And then, as you're maybe wondering why I have an ashtray on here: when we say desktop environment, or file system, these terms seem kind of obvious to us, but they go back to a time in the 70s and 80s when engineers and designers were trying to figure out ways to create computer interfaces that regular people could understand and use. And so they thought, oh, there's a desk, and there are pages, there are papers, and you move the papers around on the desk, and they created these metaphors that related real-world concepts to software interface principles. That was really powerful, and it worked really well, but it's been 30, 40, 50 years since those things happened, and a lot of that stuff no longer applies. Also, we've been using computers for so long now that those metaphors really don't carry much value anymore, I would argue, and so sometimes it makes sense to just let go of the old stuff and reduce complexity as a result. So, you're here: desktop environment, probably not a new thing to you. In my presentation I have a bunch of little screenshots, so if I'm droning on and on and you're like, yeah, yeah, get to the next slide, you can just look at the screenshot and wonder what it would have been like if you were using a particular desktop environment. This one is the Amiga. There are different perceptions of what a desktop environment should encapsulate. Some people think it should be a whole software suite that includes everything you would ever want to use; other people feel that it's just a terminal and a cursor and you can do everything from there. So there's a huge variety of kinds of desktop environments and how big or small they are.
For me, the question is: what does a desktop environment really need to do to solve my problems as a professional developer, or technical user, I'd rather say? It comes down to two things: management of applications, and management of the computer hardware. So I need to launch and exit applications and copy data between them, and when I come to a conference and want to present something, I need to plug the video cable into the projector and be able to configure it so that I can see the screen. It's a good thing that I started a little early, because that was actually a lot harder than I thought it was going to be. I did unplug it a couple of times. I happen to be running Debian testing and an unreleased version of Regolith, so there's a lot of shaky risk going on right now, and I have an older computer that I can pull out if I need to. But these are essentially the two things I feel a desktop environment needs. I personally don't want my themed email client or chat client or any of that stuff. I can find what I feel is the best application for my specific needs outside of my desktop environment. I feel that's kind of a lowest-common-denominator problem, where your desktop environment has its own suite of applications: you kind of feel like if you use something else you lose out on the integration with the desktop, but if you use the bundled one, maybe there are features or workflows that you miss. So it's, in my opinion, better to just let those other things exist as they are. Now let me give a demo of the desktop. I didn't really plan to not have two hands for this, but let's give it a go. So this is it. You may be understanding now why people think of this as old-fashioned, possibly. Regolith, for the most part, is just a bundling, or packaging, of existing open source components.
So what you're seeing on the screen is a combination of GNOME Flashback, which is a project that has maintained the GNOME 2 style into the GNOME 3 world, but doesn't have a lot of the extra features and stuff of GNOME Shell. For the window management we use i3, or i3-gaps, and the bar at the bottom is i3bar. These are all components that, to some degree, can be replaced with other components and changed out as needed, harking back to that metaphor about the Legos versus the 3D printer. So with i3 and other tiling window managers, the idea is that you're not dealing with pages on a desk. Imagine you're sitting at a desk and there are all these pieces of paper on it, and you're moving them around and arranging them and putting them on top of each other and so on. You don't do any of that by default. Basically you can just, sorry? I need the mouse, ah, irony. Yeah, this one; the last speaker said it wasn't working for them, so I'm assuming there's something wrong with it. In any case, I just hit Super+Enter, and that loaded a terminal. I've also loaded a program called screenkey that will show my keystrokes. If I hit Super+Enter again, I get another terminal, and you can notice that each terminal automatically consumes as much screen space as it can, rather than being stuck in the middle of the screen where I then have to move it around. A simple thing, but over time, the behavior where the window manager puts the window in a place where you can see all the contents of the application ends up saving you a lot of time as a person: you're not moving the pages around the desk to get to the info you're looking for. You can continue to do this and open new windows as needed. They can nest, they can go to the side. You can run applications, and then you can move them around. Oh, no, it's okay, but thank you.
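The Super+Enter binding and the rest of the tiling behavior come from the i3 configuration file. As a rough illustration only, here is a sketch using generic i3 directives; Regolith ships its own packaged configuration, and its actual default bindings may differ from these:

```
# Sketch of an i3-style config (generic i3 directives, not Regolith's
# shipped defaults).
set $mod Mod4                                  # Mod4 is the Super/Windows key

bindsym $mod+Return exec x-terminal-emulator   # Super+Enter: open a terminal
bindsym $mod+Shift+q kill                      # close the focused window
bindsym $mod+f fullscreen toggle               # toggle fullscreen on a window
bindsym $mod+Shift+space floating toggle       # switch a window between tiled and floating
floating_modifier $mod                         # drag floating windows with Super held
```

Because the whole behavior lives in a plain-text config like this, swapping a binding or a terminal program is a one-line change, which is part of the Lego-style composition the talk describes.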
So I can move like this, and I can, with hotkeys, essentially take windows from any location on the screen, put them in other places, make them bigger, smaller. I can make a window float. So I've just gone to the floating mode, and this is more like the traditional style that everyone's comfortable and familiar with, where you can move things around with the mouse. Then there's another mode where you can just make one given window full screen. And there are other things: if you have multiple monitors, it's easy to treat a set of windows as a group that you can directly move with a hotkey to different monitors. And once you get into it, it takes maybe a couple hours of learning, and once it's committed to muscle memory, it kind of just goes away out of your perception. It's just something that happens, and the part of your mind that you're aware of is free to focus on the problem that you're trying to solve, instead of the pages on the desk again. So that is window management. Let me keep screenkey open so you can see what's going on. The next thing is the launcher. So that's kind of the window management aspect of Regolith. The next part is when you, as a human, want to cause the computer to do something new, and that's what I would consider a desktop executor, or an application launcher as it's commonly referred to. So in Regolith 2.0 we're using this program called Ilia, and it's like any other launcher. You can move around and find things you want to launch. You can start typing text and it filters the box, and you hit enter and the application launches. There are a lot of those out there. This one is custom for Regolith. In 1.x we used Rofi, which is a really awesome tool; you can do so many things with it. But because it's not a GTK application, it required a lot more work to keep its behavior consistent with the rest of the system. And so for 2.0 I opted to implement a custom program.
In addition to the application launching, there's a set of other pages. So there are different keybinding commands that bring this dialog up. Another one is to launch commands: anything that's on your path, you can launch here. There's a notification viewer. So if you have applications that have notifications, this kind of gets into what an advertisement is, as I mentioned earlier. In a lot of systems, when you get a notification, it sticks something on the screen so that you know there's a notification there. It seems pretty obvious that that would be a good thing to do. In my opinion, though, I don't care what that thing is. Unless I'm on fire or something, in which case I do want a notification, I don't want to be interrupted from what I'm doing. I don't want anything popping up on the screen when I'm trying to think about something. And so in Regolith there's a little, you can't really tell, but there's a little thing at the bottom, this guy right here, that tells you how many notifications you have, and that's it; there are no pop-ups. So you bring this dialog up when you want to deal with your notifications. I think it's similar to the phone. Next up is a keybinding viewer. So these are all the keybindings that you can use to manage the system. Again, you can filter at the top. And when you're getting started, this is a good way to just figure out: okay, how do I shut down? How do I exit? And so all the keybindings are here. And the cool thing about this is that this list is not static. There's an IPC call going to the window manager that's saying, what are all the keybindings? It's returning that data, and that's what's being viewed here. And so if you make changes to your configuration, they're automatically reflected in this dialog. This is showing you what the window manager is actually listening for. And some of them are actually live, as in you can execute them from this dialog. The next tab is windows.
So all these tabs are also directly invocable on their own. I'm kind of going through them one by one because it helps me talk about them, but when you're using them in a real context, you're not doing that; you're just pulling up the window list. And so if you're looking for whatever window you want to load or switch to, that's where this is. And the last one is files. So I can type in some files and go to one, and it opens in the application. So that's essentially it. There are those different modes for causing the computer to do something. There aren't any menus here. There's no pull-down menu. There's no dock. There's nothing that would let you do that other than that dialog. And that's a design choice, and I'll get into that a little bit more. I guess another thing that's important to consider: when I began exploring this world, I set up i3 and I was kind of happy, but then at work I went to present something and I realized that I had to use a command line program to configure my resolution. And I couldn't remember the commands, and I was going through the man pages trying to figure out what I was doing, and I realized I didn't want to have to deal with that. GNOME has already solved this problem with a UI, and there are other things like this. So Regolith basically takes GNOME Control Center; we forked it and removed all the GNOME Shell stuff out of it. So this is really just a subset of GNOME Control Center. It doesn't have those features that don't make any sense on an i3-based system. The last thing is the look. So what you're seeing here is one of the different looks available in Regolith. A look is a combination of fonts, colors, GTK themes, and other things that all come together to give you a consistent look and feel. I can show the looks. So I'm going to switch to Nevil, which is a light theme.
And so it's swapped out the GTK theme, updated the fonts, changed the bar icons. It does all this stuff so that you can easily switch your desktop to whatever you may like. Under the hood, this is running on a part of X11 called X resources, which is a key/value store that allows you to read and write key values from shell scripts or binary programs. So all the definitions for the looks are specified in X resources. And then any program that loads and wants to know what the font is can either do a shell command or a direct C library call to talk to the back end and get that data out. So in some cases the bars are on the top, in some cases the bars are on the bottom; the look system is very flexible. I could pull in a whole different bar implementation if I wanted. For example, if I preferred Polybar to i3bar, I could make that part of a certain look. So yeah, that's essentially the tour of the front part of the Regolith desktop. Let me switch back. Okay, so that is the front. And then under the hood, what's going on to make Regolith the desktop environment is that there's an X session; it's part of X11. That X session loads i3 and GNOME Session. This is all based on the i3-gnome-flashback project that I found as part of my journey, and that's kind of what started me on what I built. It's like Voyager to V'Ger, if you will. A Star Trek reference. That was kind of the beginning, and I just kept building on top of it, and it's pretty crazy now. In any case, I just mentioned the X resource loading. So this serves as a global key/value pair system that lets me normalize theme data across any application; it doesn't have to be a GTK app, for example. And then what happens next, after the X resources are loaded into memory, is that the i3 config is loaded. i3 is the window manager.
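As an illustration of the X resources mechanism described above: the entries are plain key/value pairs that any program can query, for example with the stock `xrdb` tool. The key names below are invented for illustration and are not necessarily the keys Regolith actually ships:

```
! Hypothetical X resources entries in the spirit of a Regolith "look".
! Loaded with `xrdb -merge <file>`, read back with `xrdb -query`.
gtk.theme_name:   Adwaita-dark
gtk.font_name:    Sans 12
wm.bar.position:  bottom
```

Because the store is global to the X session, a bar, a launcher, and a shell script can all read the same theme values without sharing a toolkit.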
We've had some challenges in Regolith because of the flexibility of the window manager itself. What I mean is that with a lot of these tiling window managers, a lot of their power comes from the fact that they can be programmed by the user. You can change the behavior: what keybindings are used, what the default behavior is for how windows are rendered and laid out, all sorts of things. And those are expressed in configuration files. People like to make their own configuration files. And the issue with that is that as a system packager, when I release version 1.0, I have a configuration file. Then when I release version 1.1, I've made changes to that configuration file. But the problem is that people have already made changes to the version 1.0 file, and in order to get the new stuff, they have to go back and do a diff, figure out what's changed, and pull things in. It's very tedious. Nobody likes doing that. It's going to be error-prone. You kind of have to know more than you should have to know. So what's happened recently is that i3 has config partials. It's like the conf.d idea: instead of having a single file, you have a lot of different configuration partials, which essentially helps this problem by letting you focus on the specific part of the configuration you care about. So maybe you only want to swap the super key and the alt key. You can just modify one file and put that in your user directory, and the rest of the stuff stays stock, so that when you get a new version of the project, the things that you didn't change upgrade seamlessly. That just came in in 2.0. It's yet to be seen whether that's actually going to solve this problem or not; hopefully it will. Another aspect of Regolith is that it's a very small project. We try to keep everything as small as possible. So there's no configuration tool for the desktop. Essentially, configuration boils down to packages.
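A config partial like the one described might look like this: one small file in the user's directory that only swaps the mod key, while every other partial stays stock. The path and filename here are illustrative, and this assumes partials are parsed in filename order so the override is read before the bindings that use $mod:

```
# Illustrative partial, e.g. ~/.config/regolith3/i3/config.d/10_mod_key
# Only this one setting is overridden; unchanged partials keep
# upgrading seamlessly with the packaged defaults.
set $mod Mod1   # use alt instead of super
```

On upgrade, the packaged partials are replaced as usual and this one user file is simply layered in again, so there's no manual diffing against the new defaults.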
So when I showed the desktop here, all of these indicators at the bottom are different Debian packages. And they're installed and removed in the conf.d style, meaning that when I install, say, the time package, it puts a script in a directory, and then a program goes and sees, oh, there's another script in this directory, I'll run it. And so essentially the way you configure Regolith is by adding and removing packages from your system. The value there is that you're not running things you don't use, and there's no extra front end or extra configuration tool that you would need to use the system. That ties directly into the concept of modularity, or composition: each piece is separate and independent, and it's possible, at least, to remove one piece of the system and replace it with another. It's a key architectural feature of Regolith; it's not a byproduct. And I guess it's a belief that systems that are composable are better than systems that are monoliths for this type of application. Other info about Regolith: you can consume it from a repo. You can add a repository to your system and install packages, and then you have it alongside your other desktop environments. Or you can download an Ubuntu-based ISO. It uses the live-custom-ubuntu-from-scratch project to generate this ISO. It's based on Jammy. It uses the LightDM display manager and a few other things that are different from stock Ubuntu, but under the hood it's still Ubuntu. The lsb-release file says it's Ubuntu. All the packages, everything, is coming from Ubuntu. There's no variation that, as a user, would make you go, oh, this is not Ubuntu, it's something else. And that was also by design. Ubuntu is a very popular distro. Almost all the tools that a developer would use work on, or are designed for, Ubuntu. And I think that is a great feature, and I wouldn't want to deviate from that. I wouldn't want to cause people any problems.
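The conf.d-style mechanism described above, where installing a package drops a script into a directory and a runner executes whatever it finds there, can be sketched in a few lines of POSIX shell. The directory name in the comment is invented for illustration:

```shell
#!/bin/sh
# Sketch of a conf.d-style runner: execute every executable file in a
# directory, so adding or removing a package adds or removes behavior.
run_parts_dir() {
    for script in "$1"/*; do
        if [ -x "$script" ]; then
            "$script"
        fi
    done
}

# Example (hypothetical path):
# run_parts_dir /usr/share/example-bar/parts.d
```

Installing a hypothetical time-indicator package would just mean one more executable script appears in that directory; removing the package removes it, with no separate configuration tool involved.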
And so under the hood it looks and feels and is Ubuntu. You can run Regolith on Debian Bullseye. I'm working on testing now; this machine is running testing. It's got some issues, it's not ready yet. There's a community port for Arch. Some people have been talking about Gentoo ports; I don't know where that's at. And we have some packaging that has been done for Fedora. I need to pull that work into the build system, and eventually I think we'll have support for Fedora as well. I had to build the build system. I love Canonical and I love all the awesome tools they have, like Launchpad. But I think for a project of the scale of Regolith, I've kind of exceeded what is really reasonable to do with that system. So I essentially moved out of Launchpad as part of the 2.0 release and naively walked down this path of my own build system, and I'm on version four right now. It's a lot of bash scripting and duct tape and hopes and prayers. It usually works. The only time it seems to fail is when I'm about to do a release, somehow. I don't know why it works that way. It's based on GitHub Actions and workflows that call into shell scripts. Under the hood, for Debian packages, there's reprepro, I think that's how it's pronounced, which generates the Debian repository metadata, and then otherwise just the Debian packaging tools. Currently Regolith is an X11-based desktop environment. Work is underway right now: there's a Google Summer of Code contributor who's working on porting Regolith to a Wayland compositor called Sway. He's making crazy fast progress; it's really exciting to see his work. So based on that, I think sooner rather than later we'll probably have at least an alpha of something to show running under Wayland instead of X11. And if I can classify the project of Regolith: I would say that there are different kinds of open source projects. There are things like MongoDB, where it's a commercial company that has its own database offering, and there are different kinds of projects that have different kinds of angles or approaches.
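For the curious, the reprepro tool mentioned here is driven by a small conf/distributions file that declares the apt repository's metadata. This is an illustrative example in reprepro's real file format; the values are invented and are not Regolith's actual configuration:

```
# conf/distributions (illustrative values, real reprepro field names)
Origin: example
Label: Example Desktop
Codename: jammy
Architectures: amd64 arm64
Components: main
Description: Illustrative apt repository for packaged desktop components
```

Given that file, `reprepro includedeb jammy some-package.deb` would add a package and regenerate the repository indexes that apt consumes.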
And I would classify Regolith as a project that is not providing something new or unique. It's really just managing some complexity out there that is not really hard to solve, but is just time-intensive and tedious. Anybody can go and try to figure out what the right dependencies are to build Polybar and where to install it, or you can just install the package. Either way, it's not like Regolith is providing some new special thing that doesn't exist. It's really just a matter of convenience and consistency. So I was going to show some screenshots of other desktop environments and how they look compared to Regolith. But it turns out they all look really nice, and that was bad for me, because I didn't want to show that. It's my talk, so I do my thing. And, you know, everybody kind of knows what Windows and Mac and GNOME and KDE look like. The difference is, I think, pretty obvious. There are no docks, there are no file menus, nothing jiggles, nothing bounces. There are no indicators telling you to go do something. There are no pop-ups telling you to buy a product or service or sign into your Microsoft account or whatever it happens to be. And that's by design. If I could, I would get rid of the bar at the bottom. I've tried, and you kind of just need to have some spatial information about where you are. You can turn it off with super+i if you want, and I do that sometimes when I'm really focusing, but ideally there would be nothing static on the screen at all, and hopefully someday we can get there. And there's a reason for that, more than just being a minimalist. So I think the area where Regolith does fall behind, objectively, is shine and polish. There are a lot of glaring, kind of chunky things happening. Like when it first starts up, it just throws itself on the screen; there's no warm-up. And when you're switching around things, it can be jarring.
Some of the dialogs aren't quite clean, and, you know, it's here and there. There are just little things that are not quite perfect. So there is definitely, I think, a regression against all the beauty and consistency of things like GNOME or the Mac. I think that's a function of the number of people working on these systems more than anything. Okay, so I have 30 minutes. That's Regolith, and I want to talk about why this matters, or why I think it matters. These are the five or six points that I think are all something to consider. The first one is the one I'm going to talk about for the next 30 minutes, and these other points, I think they're interesting, but I haven't really prepared any content on them. So I'm going to skip past them, because the first one is what I'm going to talk about for the rest of the session. You could think about it like this: there was a time in the past when the PC was going to be the centerpiece of your digital world. You had the PC and everything was connecting into it, your VCR and your TV and your blender and all that; everything you had was managed from your personal computer. And I think the desktop environments kind of evolved to try to fill a lot of needs and roles for this bigger world of what a personal computer was. And then at some point digital cameras got good, and then the iPhone came out, and now our personal lives are really more sitting in our phones, in our pockets. They're not in our desktop PCs. Some people still use their desktop PCs to do all that stuff, but I don't, and some people that don't do that may prefer to push that complexity into their phones and take it out of their desktop PCs for the gains.
And so I think that we're living in a world now where there are many devices with many interfaces, many screens, and I think it's a mistake to try to take one experience, one UI, and spread it across the whole spectrum of things you would use. Maybe not because it's a bad idea, but because it takes a lot of work. It's really hard to do. There are a lot of trade-offs, and how do you get it right for everybody? Very difficult problem. So you can simplify everything if you just say: the desktop PC, that's for a certain kind of work, and all the things that are tangential to that go away. And that's one of the trade-offs that Regolith makes. So if you plug your phone into your Regolith desktop, it's probably not going to do anything, or maybe it'll show you the files stored on it if the USB stuff is set up right. So that's the fragmentation thing. The next idea is that software has to be designed by somebody, and a desktop environment is typically targeted at a set of users. And often, when you want to capture the most users, you think about the most basic user you need to address, and then make sure that all the features in your system work for them, and then maybe you build extra things on top for the other users that are more capable than that. The problem with this is that when you make that kind of trade-off, of making things as simple as possible for people that don't really know how to use a computer, you end up with things like trash cans, which sound great, but that's just another advertisement on your screen that's taking up space which, as I'll get into, could be used better for other things. And so I kind of raise the bar for Regolith and say that the lowest common denominator of person that I think would be good using the system is somebody who has some level of technical literacy: they understand what deleting a file means.
They understand how software generally works, and they're not really afraid of using a system that doesn't conform to these 20th-century ideas of what a computer does. And so if you're like that, then Regolith might be a system that you're interested in. And lastly, there's this notion of art and identity. There's a subreddit called unixporn that has what I would essentially classify as art: people who take their desktop environments, or their systems, maybe not even full environments, and create beautiful color schemes and wallpapers and widgets, and in some cases it's just really amazing. It's a completely different notion of what people use to express themselves. These people are expressing themselves through the way their desktops look, and that sounds maybe silly, but if you go check it out, I promise it will be an interesting thing that you won't regret. And so with Regolith, it's also about this idea of identity, and that that identity should be something you're able to control in an easy way, not on the rails of what a given system gives you, but rather, going back to the LEGO model, components you can totally rip out and put something else in their place as needed. Okay, so of all of those points, I didn't talk about Tesler's law, so now I'm going into Tesler's law. This is not a law; even though it's called Tesler's law, it's an adage, and I've put the definition of an adage at the bottom. I do not want to be schooled on information theory in this session by anybody, so I'm going to walk a kind of sketchy line between things that seem like they're science and my opinion. If it sounds like I'm talking about some new science, I'm not; it's just my opinion, man, so don't raise your hand and lecture me on information theory. I'm sure you're right. But in essence, what Tesler's law is saying is that there's a baseline level of complexity in a given system.
It's kind of like entropy. In a lot of enterprise talks there's always this pitch: we're going to take on the complexity of the system for you, you pay us money, and we give you this thing, and fairy sparkles happen, and all of a sudden your software is so much simpler. That's the notion of complexity they're talking about, and there are people who think that you can just go into a software system, pull the complexity out, put it in the waste bin, and you're done, you've solved the problem. What Tesler's law says is that that isn't possible: there's an inherent amount of baseline complexity in any software system, and if you simplify the software, that just pushes the complexity onto the user. So if you had a complex piece of software that solved the problem in a thorough, or complex, way, and you say, no, no, no, let's make that simple, that just means that the user now has to deal with that complexity. And maybe that's fine; maybe that's the best way. The law isn't saying that the complexity should be in the computer or that the complexity should be in the person's mind; it's just saying that the complexity doesn't go away. It's conserved. And this idea that we have in the software industry, that complexity is like a LEGO brick you can just pull off of the system, it doesn't hold. Now, there's always the case where a program is doing something buggy, like it's doing too much work, or you realize that there's a simpler way of solving a problem. Of course you can reduce complexity in cases where there was excess complexity to begin with. But there is this baseline level, and because this is an adage, there's no proof here. I can't guarantee you this is true, but I feel that it is.
And how this plays out in a desktop environment, which is what we were just talking about, is that if a person is reasoning about an application, you can shift where the complexity of that experience sits relative to the user. You can put more complexity in the application, you can spread the complexity out, or you can put more of the complexity in the user's mind, but it's the same constant amount of complexity in any architecture. And what I'm going to talk about next is why it matters, why these scribbly balls may be better on one side than the other. So in the definition, the last sentence says "instead it must be dealt with"; it's talking about complexity, either in product development or in user action. Another thing I wanted to say is that while this is cast toward GUI applications, I think it applies to a lot of software domains that have nothing to do with GUIs, and to me, someone who's been writing software for 20 years, the fact that this rarely ever comes up is weird. As I said before, it's very common at enterprise software conferences for someone to be standing in front of you saying: and we're going to do all this hard work so you can focus on your business logic. That's Tesler's law at play. You can see how different kinds of complexity may be structured. We have the notions of web services versus monoliths. There's not really a reason why one is better than the other, except that as humans, when we deal with smaller pieces of complexity, it's easier for us to reason about them than larger pieces of complexity. To me, that's the biggest thing. And so if you can make a system easier to reason about, that's better. There are sometimes costs; there are always costs to that trade-off. And really, it's about what the sweet spot is, in terms of which end of the scale you architect your application or your environment toward.
So in the case of Regolith, we end up wanting to put more complexity in the user's mind. Or rather, we want to balance the complexity between the user's mind and the computer to optimize for time. And what I mean by that is, and if you're wondering, I'm the illustrator here, okay, in case you want to know whose fantastic work this is. In this diagram, when I talk about these loops, there are three different feedback loops going on when somebody is sitting in front of a computer. On one side, in their mind, they have a model of the problem they're trying to solve, and they even have a model of what's on the screen, and they're thinking about it. On the other side, there's the computer, and it's running the program. And then in the middle there's another feedback loop, where the person is looking at the screen, they're seeing a window, and they're like, oh right, I've got to do that. Once their mind figures out what they need to do next, there's this process of moving the appendage: there's a piece of plastic on the desk, and they grab that thing and start pushing it, and then there's a cursor on the screen that mirrors the movement of that piece of plastic, and they use their appendage to move that cursor to a button they want to press in order to get to the next step in their workflow. The loops at the ends are milliseconds, nanoseconds; this is going on very fast. The human mind works very fast. But this middle process takes, at best, hundreds of milliseconds, probably whole seconds. So this is very slow. So if we think about how you would compare the efficiency of different desktop environments, one way you might measure it is: for a given fixed task, how long does it take to get it done in one desktop environment versus the other? The one that takes less time is more efficient.
And so if that's the case, one way you can optimize the system is by taking as much as possible out of this slow loop and pushing it into the two fast loops. By doing that, you let people work faster. So that's one aspect of it. There are some obvious things: when you're using the keyboard to manage your windows, there's less physical movement happening, so maybe you're spending 200 milliseconds or a second instead of two seconds or ten seconds. And again, for people that maybe use a computer once a week, it probably doesn't matter. But if you spend eight hours a day or more on the computer, that time adds up. And so the more of the workflow of your application you can embed in your mind, so that you don't have to do this physical dance every time, the faster you'll be. And it only really takes maybe a couple hours, or a day or two, to get used to this. So that's one aspect: minimizing this slow feedback loop for interaction with the computer. The other thing is this notion of what I call application fidelity. So you can imagine two desktop environments here. You have one that has the bars and the menus and the docks and the pop-ups and all the stuff on the screen, and then there's a spot in the middle for the application. And then there's maybe some other desktop environment where none of that stuff exists, and the application takes up almost the whole screen. So maybe that one gives the application 10 or 20% more space, more pixels. And the reason why this matters is that in this loop, when the human is reasoning about what's going on on the screen, the more pixels, the more resources the application has to put information into the user's head, the less of the slow feedback loop may have to occur. A trivial example of this would be if you're reading, say, a page on a screen and maybe it runs past the edge.
Maybe you have a really tiny screen, or something's wrong, and you have to scroll and move over for each line. Imagine how much slower that is compared to being able to read the entire document at once without having to do anything with the screen. So the fidelity of the application relates to the efficiency of the desktop environment. So this gets to the trade-off. Some people prefer the ability to have help on the screen, meaning they can visually discover the features of the program or the environment just by seeing what's there and clicking buttons. I think a good example of this is context menus on a desktop. You can just right-click, and you get some menus, and you keep going in, and you can explore what features are available just by visually looking at these menus. Compare that to a terminal application, where you're just looking at a blank cursor; you have no idea what the potential is there. These are two different modes of operation, and they both have different pros and cons. And for someone who might use Regolith, because that technical bar is higher, and we push more of that discovery into the mind and less onto the screen, as a result, over time, you end up being more productive. But the cost is that you have to know what's going on. You have to capture that information and keep it in your mind. And it's not for everybody, and I'm not saying it should be. But that is essentially, I guess, the reasoning why I think that something like Regolith is not just a retro experience from the 90s, or something that is designed for old hardware. It's really more of a rethinking of what the desktop environment should be for people that use computers all the time. And what I really mean with advertisements is that, and I'm not picking on any particular environment or distro, when you load a desktop environment, it's going to have some icons; that's true for Windows, Mac, or whatever.
And a lot of those icons you don't even really care about. You'll never use the buy-from-Amazon icon on an older version of Ubuntu, or you'll probably never use the file finder application in whatever. But they're statically there on the screen forever. So that's just something that's wasting space on your screen. And of course, you can go full-screen mode, you can go and tweak the settings; most people never do that. And sure, I'm not saying it's not possible to get other desktop environments tweaked so they consume less screen real estate. But when you design a system that by default understands the value of keeping the screen for the application, it just ends up being, over time, a more productive experience, in my mind. And so this is just a summary of what I said. I think when people think about complexity, it seems like an obvious thing: let's make the computer do all the work, let's make it as simple as possible for people. And definitely, in a lot of cases, that's really valuable. If I go to the DMV, I would love to just be able to go to a screen and not have to think too much about what's going on, just push a couple of buttons, and all of a sudden my license has been renewed and I can leave. I don't want to know the minutiae of what's going on inside the DMV and all the rules and whatever. So I see value in that. But there are other use cases where, as a technical user of a system, I want it to scale to my understanding of how that environment works. And I want to do that because I get productive value out of it, not because of some notion of minimalism or aesthetics, which is also valid, but simply because I might be able to leave at 4:30 instead of 5 o'clock. And to me, that's worth a lot. And again, I want to make sure that I'm clear here: I'm not saying that Regolith is more productive or efficient.
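The "leave at 4:30 instead of 5" claim is just arithmetic on small savings repeated many times. The numbers below are made-up illustrative assumptions, not measurements from the talk:

```shell
#!/bin/sh
# Illustrative arithmetic only: assume a hotkey saves ~2 seconds over a
# mouse round-trip, and you perform ~900 window actions in a work day.
SAVED_PER_ACTION=2     # seconds (assumed)
ACTIONS_PER_DAY=900    # (assumed)
total_seconds=$((SAVED_PER_ACTION * ACTIONS_PER_DAY))
echo "$((total_seconds / 60)) minutes saved per day"
# prints "30 minutes saved per day"
```

Whether the per-action saving is really two seconds is exactly the kind of thing that varies per person, which is why the talk frames this as a personal result rather than a universal one.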
And I'm not saying tiling window managers are more efficient than stacking window managers. Everybody is different; everybody brings their own experiences and biases. I have found this to be true for me, and it may be that it's true for you as well. That's about it for the presentation. You can find out more at regolith-desktop.com. I've got some stickers and stuff over here; you're welcome to grab as many as you like. And if there are any questions, or accusations, or whatever, I'm happy to field those. Yeah, thank you. The question was: does Regolith have a session saver, meaning if I log out and log back in, all my windows come back the way they were? The answer is no. There's stuff built into GNOME Session to do that; it doesn't work in Regolith, and I don't know why. I haven't really gotten into it; I've kind of forgotten about it. For me, typically, my desktop runs for weeks at a time, so it just doesn't come up. But I do see the value of having that kind of session information persist across logins. At this time, it doesn't. It may be something that happens when we move to Sway; I'm not sure. Yeah, the question is: for users that are familiar with more traditional desktop environments like GNOME 2 or MATE, if they were to get into Regolith, how would I recommend making that an easy learning curve, an easy experience, coming from that kind of background? This is a really good question, and I wish I had a good answer for it. A core tenet of Regolith, as I said in the beginning, is inclusivity, meaning that I would love to erase as much as possible this notion that the more you know about your compiler or your kernel, the better you are. I want this to be approachable to a larger group of people. One of the things Regolith does toward that is the keyboard viewer, which is this.
And so this is essentially the visual discovery of what you can do in the system; I guess it's akin to that context-menu idea. When you log into a Regolith session for the first time, this comes up automatically, by default. So it's really tough to offer some kind of easy on-ramp, because when you're used to that visual exploration of seeing the thing on the screen, clicking it, and getting the action, there's just really nothing like that here. But the thing about it is, you can do whatever you want; it's not going to break anything. That's another part of being a technically literate computer user: you don't have that fear that if you touch the wrong button, something bad is going to happen. When I was a kid, I would come to my grandmother's house, and she had a remote control, and she wouldn't let me touch any of the buttons because she was afraid I would hit something that would change something she couldn't get back. So she had to run the remote control for me, which was really lame. But it's this idea that you should feel free to experiment with your computer and play around. So I think step one is just: download the ISO and boot it from a USB drive, so you never have to worry about mistakenly doing something to your system, play around with it, and then go through the key bindings. The thing is, you can start small, meaning you just learn how to open windows and move them around, and you can get a lot done with just that. Then a little later, you can learn some more nuances, like how to move windows across workspaces, or workspaces across screens. But it's really a simplified experience. There's actually less to know than in those older desktop environments that have more features and more functionality.
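To make that "start small" idea concrete, here is a sketch of the kind of i3-style key bindings Regolith builds on. The syntax is i3's, but the specific bindings and defaults shipped by any given Regolith release may differ, so treat these as illustrative:

```
# Illustrative i3-style key bindings (i3 config syntax; the actual
# bindings shipped by Regolith may differ).
set $mod Mod4                             # the Super/Windows key

bindsym $mod+Return exec gnome-terminal   # open a terminal
bindsym $mod+Shift+q kill                 # close the focused window
bindsym $mod+Left  focus left             # move focus between windows
bindsym $mod+Right focus right

bindsym $mod+1 workspace number 1         # jump to a workspace
bindsym $mod+Shift+1 move container to workspace number 1
```

A handful of bindings like these covers the "open windows and move them around" stage; workspace and multi-monitor movement can come later.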
So I think the biggest thing is just resetting your expectations and realizing that the desktop environment can be very simple. Sure. Yeah. The question is: in this notion of advertisements and baseline functionality, is there any set of advertisements that are essential, that you can't take away? Earlier I was talking about how, ideally, there would be no bar at the bottom, and the application would get a hundred percent of the screen resources by default, thereby having all the application fidelity possible. And for a while I was on it. I was like, yes, I can do this, and I removed the bar. But there were two things I had to have. One is this notion of space: there's something about seeing this all the time that just helps you mentally know where you are on the map, I guess. The other is the time. I needed those two things, and I couldn't get rid of them. I'm still thinking that maybe there's a way. One of the problems is that, because of factory economics or whatever, our screens are long and skinny, which wasn't the case in the old days of CRTs, and yet we have these UI metaphors that are really greedy with the space. Look at all this; just terrible. Look at all that; there's nothing going on there. In Regolith's defense, when you have a window open, this is where the title bar goes, so it's not always wasted. But to answer your question, to me it's really spatiality, and I think there are different ways of doing this; I would like to get this down to three pixels if possible. So: spatiality and time. And I think time can also be done in some other way. The rest of this stuff, I put it here because I'm testing this system on Debian testing, but I'll probably remove it later. But I think the other thing is knowing where to start.
Like, when you have visual discovery, you have the activity menu, you have the sidebar of icons; you click on the browser and you get started. The screen is telling you what you can do. And here, that doesn't exist. Nothing is telling you what you can do. So another thing I was thinking was that maybe the first time you log in, there's some kind of special UI, like mobile apps have, an overlay that shows you, here's where you do this and here's where you do that, some kind of introduction. But I just haven't gotten there yet. Yeah, the question is: do I feel like I'm the only one, or is it a rare thing, to want a system which doesn't have a bunch of stuff on the screen, pared down so the application can take up more space? I don't; I think a lot of designers probably would prefer this mode. The issue is, again, addressing the lowest common denominator when you're designing a system for people that have no information about the system. There's this common pattern in Linux of distros that are for the new users, or for the Windows users, or for the Mac users, or whatever. And I'm not saying this is wrong; I think it's fantastic that there's a distro designed for people that use Windows, and it looks like Windows. The thing is, I suspect a lot of distros would prefer something with less visual stuff, but when it comes down to, well, that means you're going to have one million users by default instead of ten million users, the user count is more important than the visuals. And also, I'm pretty sure that all of these environments that are not Regolith can look like this too.
You just have to configure it: you go into the settings, you turn off the dock, you turn off the bars, you can hide the menus. So I think in most cases you can get rid of almost everything to some degree. But because it's all on by default, that's a feature, and the feature is getting you more users that have less technical capacity or aptitude. Does that answer your question? I think I didn't answer your question. Are there other options that go straight to this philosophy? Mm-hmm. Like I mentioned before about this subreddit called unixporn: if you look at those screenshots of desktops, maybe half of them are super minimal like that, where they have very little on the screen, and there is this minimal aesthetic. Yes. Well, yeah, and that's more about the why. Yes, that's true. So I'm not claiming that I came up with this idea, but I also don't know of anybody else pursuing it. I don't think any of this is particularly new; it's just observations over time. User interfaces are really hard to design in a way that appeals to a lot of people, and I solve that problem by not appealing to a lot of people, by appealing to people that I think work the way I work. If I were given the task of doing something like this but with a user base like Ubuntu's, I would not do any better than what the designers of the Ubuntu desktop interface have done. Okay, it looks like we're at time. Thanks again, everyone, for coming. Please come up and grab some stickers, and thank you again.