So next I want to bring in another deep thinker in the nonprofit tech space, Renee Black of Goodbot, who's going to talk a little bit about some of the work she's been doing around responsible tech, which, as we start talking about AI and its uncertainties, is one of the many ways we can approach that conversation. So with that, over to you, Renee.

So yeah, my name is Renee Black. Thanks for having me here today. I'm not going to be talking so much about the use of technology, as some of the other presenters have, but more about some of the impacts of technology on society. Goodbot is a new project that's just getting off the ground, and it follows on from some work that I did several years ago with an organization that I founded called Peace Geeks. My old team at Peace Geeks worked, among other things, on projects with digital peacebuilders in the Middle East, working to respond to extremism and polarization. And in 2018, when Facebook changed its algorithms to optimize for engagement so it could sell more ads, the impacts on the communities we were serving were really significant. It didn't take long for those impacts to manifest in really challenging situations in communities, and in some cases in violence outside of digital spaces.

So peacebuilding has been my work and my passion for the last 10 years. But after I stepped away from Peace Geeks, I started to think about what I wanted to do next, and about some of these fundamental systemic issues and the roles they play in posing challenges to communities around the world, but also here at home, where the harms tend to manifest a little later on. That's something I really wanted to get into. As I started to take a step back, think about what the harms look like, and try to understand them at a bigger scale, it's pretty overwhelming. There are a lot of issues around the misalignment of venture capital funding models.
There are issues around a lack of transparency, deepfakes, disinformation, and misinformation. There's the automation of jobs, and there's cyberstalking. A lot of interconnected issues are coming into salience now, and if you're trying to figure out where to jump in, it's pretty overwhelming. The good news is that a lot of different people are thinking about this. But we are at a moment in time where, as the saying goes, humanity has paleolithic emotions, medieval institutions, and godlike technology, and there's a bit of a crisis that we need to address.

At this moment, a number of nonprofit organizations are starting to fill the void in this responsible tech space. There are organizations like All Tech Is Human, which is working to create salience around the different people trying to improve technology within companies, and to think about what the social impacts of technology are. There are conveners like Access Now, which holds one of the biggest digital rights conferences in the world. And there are folks like the Tech Stewardship Program here in Canada, which offers a microcredential around improving understanding of the value tensions that arise as you develop technology. So there's a budding and growing community of responsible tech practitioners coming online, but they don't necessarily know one another. One of the roles I'm hoping to play at Goodbot is to do some convening around this work.

The other thing is that as some of these harms have become more evident, we've seen changes in where problems are being addressed. Initially, there was a lot of work at the incident or events level, trying to understand what was happening.
But increasingly, we're starting to see organizations going deeper and looking at the underlying structures: again, things like the VC funding models that create financial incentives for companies to take actions they understand have the potential to be harmful, and that disincentivize them from taking a do-no-harm approach. The work right now is happening in what folks in the prediction space call the in-between times: we are still working very much within the institutional structures we have traditionally used, but those structures are not going to serve us given the way technology is evolving. As we've been hearing from some of our presenters, it's moving at such a fast pace and with very significant implications, including that you're now able to generate websites, generate music, generate photos, and generate videos. We are already unable to deal with misinformation, disinformation, and deepfakes, and this is only going to become a bigger problem. The impact on social trust in communities is only going to deepen if we don't come up with the right mechanisms for responding to this. So we're at a moment in time where some small networks are starting to form around how to do this, but we don't yet have the institutions we need, and we don't yet have the collaboration we need to get there. Those two things are going to be critical going forward.

This is just an example of some of the different AI tools. ChatGPT has been one of the ones getting a lot of attention, but there are a lot of other tools out there right now. We've been hearing about some others today, and this is going to manifest in a lot of ways across our society.
And hopefully we're building towards a future that prioritizes things like trust, strong governance, strong digital rights, transparency, and accountability. But that requires intent and actual change, changing the incentive structures and not just doing the tech equivalent of greenwashing. We want to commit to building strong, trust-based communities and digital spaces. And that's all for me today. Thank you.