My name is Latfi, I'm the founder of Yickey, a community media company. We help make it accessible for everyone to create community video campaigns. So really happy to be here. I will be the host and the moderator. I saw what they did before, so I'm gonna improvise the housekeeping. So as they say, you know, abide by the code of conduct, so make sure that we respect everyone's opinions and we keep it clean. We're gonna have three incredible guests, we're gonna engage in a really interesting conversation with them, and then we're gonna move on to the questions. For the question part, we have Luisa. Luisa's gonna help pass out the cards. Even if you wanna raise your hand and speak directly, we prefer the cards. If you don't know how to read and write, you can tell her and she can write it for you. So no worries about that. I also want to give one minute for Luisa to introduce herself, because at SOCAP we have incredible volunteers, and not just volunteers, who do a lot of amazing things. So Luisa, if you can introduce yourself a little bit. Does this work? Okay, great. Hi, everyone, I'm Luisa Castellanos. I'm a co-founder and COO of a company called Science on Call. We're a tech support platform for restaurants. But I'm here at SOCAP this week as part of just kind of promoting women and diverse founders all across the country. I'm based in Chicago, but really happy to be here and experience this with all of you. So I will also have these note cards and pencils. Wave at me if you have a question you wanna write down, and I'll come grab your questions. Thank you so much, Luisa. All right, thank you all. Enjoy. So just to make sure you're in the right session, the question is how investors can promote technology in the public interest. We have Gaurab, Katie, and Zoe with us to share.
So before we start with the questions, I would like to invite our guests to introduce themselves in one simple minute, on who they are, so you can get a sense of the talent behind the wisdom you'll be hearing. Sure, I'll start since I'm right here. I'm Katie Knight. I'm the president and executive director of a private foundation called Siegel Family Endowment. We are focused on the impact of technology on society, both the positive potential of technology but also sort of risk and harm mitigation of pervasive technology. So we work across learning, workforce, and infrastructure to try to operationalize that mission. And in particular we care a lot about public interest technology and have been invested in and advising on sort of the public interest technology infrastructure, the Public Interest Technology University Network, and other components of the public interest tech world that we'll talk more about during this panel. Zoe? Hi everyone, I'm Zoe Weinberg and I'm the founder of ex/ante, which is an early stage venture fund that's focused on investing in agentic tech, which is technology that supports human agency through enhanced privacy, security, and digital rights more generally. We're pursuing this really against a backdrop of rising digital authoritarianism and surveillance capitalism worldwide. I think we all sort of intuitively know at this point that every move of ours is tracked, our digital footprints are being mapped by both governments and corporations. We don't own our own data or even really know where it lives. And I think we've all sort of resigned ourselves to the notion that this is the sacrifice we have to make for innovation. We think that needs to change, and we believe that the real antidote to digital authoritarianism is individual rights and human agency and empowerment over your data, your assets, your information. So that's what we're up to. Thank you. Good afternoon. First of all, thanks for being here late in the day with us. My name is Gaurab.
I'm the executive director of an organization called Responsible Innovation Labs. Our mission is to make responsible innovation the essential mindset and operational norm for technology innovators, startups, and the venture investors who back them. Some interesting facts: at least in some surveys, more than 70% of operators and investors say they think responsible innovation should be the norm in the industry. Obviously, many don't know what that looks like in practice. And when you combine that with, in the US alone, more than $100 billion going into tech startups from venture on an annual basis, even in a down market, there's quite a bit of leverage there. So our goal is to stimulate supply and demand for responsible innovation, and I'm happy to chat more about that today. Thank you. Great. So let's start with the first question, on definition. What does technology in the public interest mean to you? I can start with one broad view, which is the notion that technology should be developed with, for, and by all the people it will impact. So not just the people who are sitting in some conference room in front of a whiteboard thinking about what technology could do, but thinking about what technology should do, what it can do. So again, like I was saying about leveraging tech for good, there's a lot of that in the public interest space, but also thinking about how we think differently about the entire ecosystem of technology development so that it is all in service of, or at the very least not actively harming, people through what is being built and deployed. I think that's a very good definition. You know, Katie took a sort of broad view of it. Maybe what I'll do is zero in on the slice of technology in the public interest that I focus on.
I think there are lots of ways that technology can both advance and undermine the public interest, but I spend a lot of time thinking about how it advances or undermines democracy. And you know, when I step back and think about what is the greatest threat to, let's say, American democracy in 2023, to me, it is not a physical invasion on one of our borders. It is really the deterioration of democracy from within, as a result of lack of trust in public institutions, greater polarization, conspiracy theories, disinformation, et cetera. And so that is where I am really interested in both the ways in which technology is giving rise to those problems but also the ways in which it can potentially solve them. I think historically, the intersection of tech and investing in democracy was really focused on civic tech and potentially defense tech, depending on how you look at it, neither of which really addresses these kinds of underlying dynamics. So that's the aspect of tech and the public interest that I'm really animated by, but I think there is a sort of broader way that tech interacts with social systems that Katie spoke to. Our spin on this question: in a prior life, Katie and I got to work together on public interest tech, and at least from my perspective, I will not speak to the dot org, the dot edu, and the dot gov. You can think of those three domains; I'm not gonna focus on those for the purposes of today. So in terms of the dot com part of public interest tech, or tech in the public interest, for me, I would say there are three tiers, if you could think of it that way. The most basic is, don't be a jerk. I think that's in the public interest, but it turns out that's a high bar for some. A second tier, where we are really focused, is, Katie, to your point, understanding the impact of your product and service on stakeholders outside of your shareholders.
Ideally, you are not dumping externalities and harms on people on your way to, you know, grand wealth in the name of innovation. There's a purpose for that, but ideally you're not doing harm. And then I think the third thing that the private market can do is build technology that advances human progress or solves a human challenge. That's maybe the ultimate in what we're trying to do, but most companies may not see themselves yet on that spectrum. One other thing that I think technology in the public interest does is, ideally, there's no competitive advantage for disregarding what might be in the public interest. Just one example: I was speaking to a startup in Seattle, where I'm based. They work in the field of synthetic voice, and they were talking about competitors of theirs that essentially have no ethical constraints on how they develop. And you can see the marketplace giving that other company an advantage. It's gonna be hard to serve the public interest where the market incentives don't reward that kind of behavior. That's gonna take us to the next question, because with innovation, we don't know what's gonna happen. So it's really hard to regulate or to understand. Tech is the only industry that can launch something in the market and see what's gonna happen. And also on the investment side, everybody's pushing money first and putting pressure on entrepreneurs: hey, you're not building fast enough. And we know building tech is tough. Building ethical tech takes a lot more time. So who's gonna lead the process for tech for good, and who's gonna lead putting technology at the service of society first? Who's gonna lead that process? Who should lead? You are, you're gonna lead. Who's gonna lead? I mean, piggybacking off of a session yesterday in this same room at 4:30 on systems change.
I think it was Jeff who said there's complex and there's complicated, and the tech venture ecosystem is incredibly complex. There are multiple nodes that we have to think about. I'll let Zoe speak to the venture side. We're interested in building our organization for founders. They are the ultimate builders and operators. They are often the heroes of the tech ecosystem, and also the ones who are resource constrained and struggling every day to keep their companies and dreams alive. And we wanna build resources for them that are practical. Then there's venture, and there are limited partners. We all play different roles, we sometimes wear multiple hats, but I think we're looking to those three audiences right now, founders, investors, and LPs, primarily as nodes of change. You know, I think systems level change obviously requires leadership across a whole variety of different vantage points. But you mentioned limited partners, LPs, which are the funders of venture funds and other types of funds. And I do think that LPs have a certain amount of leverage and ability to really move the needle, because a lot of the incentives throughout the system flow from whoever is the holder of capital. And maybe I'll give a quick example. As I was raising my fund, I found that a lot of the LPs I spoke to who had ESG or impact mandates were very intrigued by what I was doing, but also often reacted in a way that was like, hey, we only have a couple buckets. We have a climate bucket, we have a workforce bucket, we have a racial justice bucket. We don't really know what bucket to put you in. And because of those frameworks and structures, it often meant that a strategy that was sort of outside the mold just wasn't eligible. I've spent a lot of time thinking about why it is that democracy or digital rights doesn't usually have its own bucket, and I have my own theories about that, which we could get into.
But you know, I think if limited partners say, hey, ethical tech or responsible innovation is something we wanna get behind, and here are the ways in which we're gonna deploy capital against that, I think that does make a difference. Yeah, and the only thing I'd add is, you know, it is a broad ecosystem, and there's also a place for consumers to demand and to think differently about the products that we use and the sort of things that we want to see enter the market, and to vote with our actual actions in that regard. And then obviously there's a role for government to play in terms of how we figure out regulation that doesn't stifle innovation, and I don't think that's an impossible challenge. I think there are a lot of voices on either side of that issue that wanna make it seem very all or nothing, but there are means to actually create both the incentives for ethics and responsibility and the sticks that punish people for not thinking about negative externalities. Our chairman likes to talk about it this way: when you think about pollution in the oil industry, or people who are polluting waterways, we figured out ways to make them pay for that. We haven't necessarily figured out how to make tech companies or new technologies think about and account for the pollution that they are putting onto the world, particularly because it's sort of an intangible pollution, like destroying democracy. So what are the tools you see that can hold them accountable, to make them work on this? What tools can they use to make sure that the technology they provide to society is right and ready? And for the investors as well as the founders, because we put a lot of pressure on the founders, but as you said. I'll start on this one. Funny you should mention this.
Responsible Innovation Labs has just put out, or will be putting out, responsible AI commitments, and let me bookmark that, but I have to plug our work of course. I think the bigger picture for us is Silicon Valley, venture as an industry. Two things can be true. One, this is like 40, 50 years of habits, and yes, there's the accountability side of it, but there are also just habits, and these habits happen to have delivered enormous wealth and innovation for a small number of people. And so I think there are a lot of people of goodwill who want to do things differently, and to your question, they need tools. So we're thinking of our responsible AI commitments as a set of practical tools for founders who aren't building the frontier models of AI but need a set of really simple, simplified methods in their product development. And particularly, we need to help venture investors with their governance responsibilities, and sometimes it's as simple as what questions to ask, reminding people that it needs to be part of the cadence of your onboarding of portfolio companies in the venture industry. So I think it's looking at the existing inventory of behaviors and trying to insert things that we think actually are not like eating your vegetables, although I like eating my vegetables; these are good things to help with growth in the long run. I think there's also, when you think about the traditional sort of accountability structures that exist, it's about your sales goals, your traditional ROI, growth on paper. I think there's an opportunity, when you think about the tools and frameworks that exist to measure impact, to measure sort of other intangible outputs that are not just about return on capital, for investors to make those a part of what they're requiring or what they're looking for.
So in philanthropy, I'm required, or I choose, to look at impact across many dimensions, and in the nonprofit sector that's of course a more recognized way of doing business. But I think it's applicable to any sort of investment opportunity or any kind of relationship that you're building, to say: okay, what are the dimensions of impact? What are the places where this could actually have a positive or negative return, and what do I want to hold people accountable to? We've seen in ed tech, which I think is often sort of divorced from the conversation about technology more broadly and left in just the education space, that there's a ton, a ton of money being spent on things that have absolutely no demonstrated value to students. But they're able to sell big contracts to districts, and so they keep getting more investment, and that flywheel keeps going, and we're left with students who can't read. And that doesn't serve anyone other than a really small slice of the investors and the creators of these tools, who may mean well but don't have any framework for what impact should look like or what student outcomes should look like. So how are we also partnering to change some of those habits, to say: what does it look like to bring in best in class education experts to help me identify what sort of impact this product should have, or environmental experts, or anyone who's not just inside the bubble of the traditional VC investment or traditional sort of financial models for this work? The only thing that I would add is that I think the tools and resources, both for founders and investors, are more robust in some sectors than others.
So I would say at this point in climate tech there are pretty sophisticated ways of measuring your impact, and there's some consensus around how to do it and also what to prioritize as you build, and all those things. But when it comes to things like measuring access to information or censorship or surveillance or disinformation, we actually don't have a lot of metrics and measurement or resources for founders or investors to navigate those issues. So I think there's also a lot of white space to do some really good work there that would be helpful to all the different stakeholders involved. And on that one, for example, for me, I have to say it because we're building a tech for it. Yeah, you should be commenting here. And the situation we see is that investors are asking us to produce the app in less than six months or a year, to get the app out, which we can do. But if we want to do it ethically, we need to add a year or a year and a half more before it goes out. And the response you receive is: hey, you're slow, you're not efficient. Or: let's put the ethics after, let's see if this gets traction, and let's figure out the ethics behind it after that. So how do we manage this side of the conversation? How can we keep the founders or the owners accountable now? How can we hold the investors accountable on this? What is the good side of accountability, and what's the dark side of accountability? I mean, I think it can be very hard to assign responsibility or accountability after the fact. One of the ways that I try to mitigate that is thinking about mission alignment up front, on a bunch of different levels. So first, on the product side: what is the possibility that this product can be used maliciously or repurposed in a way that it wasn't intended, et cetera?
So it sounds like some of what you're trying to prevent as you build. But also, on the business level, does the business strategy itself reinforce some of the mission and values that we would like to see out in the world, or does it sit in tension with that? And then also on the founder level, especially in early stage startups, the only thing that's guaranteed is the team, and even that maybe is not guaranteed. And so you really have to have deep alignment with founders about what kind of business you wanna build. I think that's the best way to ensure that there's a measure of accountability or responsibility, but that does have to happen on the front end. And you can't do it without patient capital. I think all of this requires, to Zoe's earlier point, LPs and others to think differently about the time horizon for tech development, and what it looks like to bake in ethical models, and what it looks like to test and design with different communities and for people, not just for outsized profit. And that is a really big mindset shift, but it's one that really should be occurring across the industry, really across investing in general: that we have to be thinking about these things from the perspective of their potential negative externalities, their potential to harm democracy, their potential to harm the environment, not just from the short-term perspective. So of course this problem is bigger than the tech industry or just public interest tech. It comes back to capitalism and stakeholders and who's being rewarded and held accountable for different things. But I think we can try to model that, because tech is traditionally developed so quickly, because new ideas are a dime a dozen and we have so many innovators in these spaces.
How can we, in public interest technology and the tech industry more broadly, kind of model the behavior of incorporating ethical considerations, incorporating these notions into our work, and demonstrating that you can move at the speed of just sanity? Yeah, two thoughts. One, just coming back to the point that we made around LPs, limited partners. To my knowledge, and maybe there are people in the room who can correct me on this, I don't know that many LPs, particularly institutional LPs, are holding their fund managers to account here; let's just use venture as an example, or private equity. There's not a responsibility thesis per se. So if you're an allocator of capital and you are not asking about this, then I'm not surprised that investors are running off and putting pressure on growth at all costs. So I think there is a role there, and I have heard of LPs using ESG questionnaires for venture, which feels like a really crude tool, not really built for what tech startups are in fact doing or have the resources to do. So I think there's again green space for LPs to innovate there. Although, as one LP reminded me, they themselves are compensated on return. And so everybody's in this spiral where nobody wants to do anything because they're compensated in certain ways. So that's one set of things. One question, just to maybe spice it up a little bit, that I'm struggling with relates to one of the challenges that responsibility advocates have: the minute we say it, responsibility is perceived as always meaning slowing down. And I think that triggers a set of unhelpful emotions from many of the people we wanna bring along in this work. And I'm struggling and challenging myself to think about how we communicate the importance of these values in ways that are not clashing with some of the values that make the innovation ecosystem quite valuable for all of us, right? Speed is one of those things.
We wouldn't have mRNA vaccines if we didn't have speed, and there were imperfections and there were challenges with that. So I just think it's important for us as advocates to think about how we talk about these things, because it doesn't necessarily have to conflict with other values we hold dear. I have one more question, but before that: we're gonna open the floor soon for you guys, so let's make sure that we have questions, and challenging ones, so we can move forward. So what I have is: do you have examples of good best practices, concrete cases that come to mind that succeeded, that worked out? In terms of products? Yeah, in terms of a company that succeeded, a tech that succeeded in this process of being good for society. I mean, I can give some specific examples, but at least in my domain I end up thinking about some of it as being kind of reactive and defensive, and then some of it as being proactive, and you sort of need both. And so I'll give you an example on the reactive side: we have invested in a company that does deepfake detection, right? And so that really is about being defensive and trying to identify harm as it is happening in real time. And then there are tools being built that are more on the proactive side, if that's what you wanna call it. Maybe an example of that is a tool around data ownership that gives end users more control over first party data, or where their data lives on the internet, that hopefully should reduce the possibility of misuse or harm or targeting, et cetera, on the front end. And so I think there are different ways of tackling and approaching those problems. They both have challenges, but that often is a way that I think about it.
And I think there are some, I mean, to go back to ed tech and health tech as two spaces where I think there's been a lot of responsible innovation and successful companies that have been thriving even inside of regulated industries, there are I think some really good models for how you can do innovative work. Sorry, my phone thinks I'm exercising. You can do innovative work and have it be meaningful and impactful to people and their lives, not just sort of financial return, but still achieve strong financial return on reasonable timelines. I'll share one sort of macro example and maybe a micro example. One thing I've been thinking about a lot, I'm now two months into this role, and I've been reading a little bit about what I would call, or I think what is called, the quality movement of, let's call it, the second half of the 20th century, where through a mix of regulation, and obviously there's litigation, quality starts to become synonymous with good management, and you can't have a reputable company if you don't have sort of a quality performance management plan in place. In the same way, certainly self-regulation is important, we believe there's a role for industry to play in setting some standards, and we will need our policy partners in government, we will need our civil society collaborators, we will need the capital allocators, we'll need all of these people coming together to hopefully make responsible innovation much more of the norm in terms of how you do things. So I'll leave it there. Great, we have one question, more to come. What lessons from the existing fields of impact and ESG investing can or should be applied to public interest and responsible tech? Ooh, spicy, let's call it spicy. So one thing, in the interest of being spicy, what I will say is I think a lesson we can potentially learn is around defining impact.
One of my criticisms of the impact investing space is that you can enter by just saying that you care about impact, without it having to actually mean anything. And in ESG, I think we've had some definition of what ESG standards are, but a very soft approach, meant to be a really investor friendly, financial industry friendly model for what ESG is. So in public interest, when I talk about this and sort of introduce the concept to people, I often say: public interest technology is not about reputation washing. It's not about how you can build seven products that create harm and then say, oh, but we commit 1% of our company profits to public interest tech and therefore it's okay, or give your employees the opportunity to do civic leave, like the sort of healthcare.gov fix thing, and say, look, aren't we spectacular? It really has to be part of the DNA, and it has to be about what you're doing as your primary activity that actually makes the difference, not just how you glom on to something additional. And I think we've seen that become the case, particularly with some of the ESG washing that companies have done to seem more altruistic than they might actually be in their day-to-day activities. You know, it's been interesting over the last couple of years to see the uptick in interest among venture and Silicon Valley in responsible AI and in issues around digital rights and things like that, and yet very little, or I would say minimal, effort to establish real ties with the very deep research and activist communities that have been working on privacy and data ownership and digital rights for a long, long time. And so I think as venture and tech start to wake up to some of these issues, as part of ESG frameworks or otherwise, there's a lot to be learned from folks who've just been working on these issues in other domains, in academic domains, in advocacy spaces, et cetera.
The caveat is, this is my first SOCAP. So I'm part of this experience; it's been great just learning and being a part of this community for the first time. I think what I'm curious about, or how I've interpreted impact investing, is it's sort of like it's over here, a different set of things compared to, quote unquote, other types of investing. And for responsible innovation, in some ways, I think we both have to build the kind of community that SOCAP represents in this space, like that's incredible, we have to do something similar. And also I go back to the stat I cited at the top, which is more than a hundred billion dollars goes in, in the US alone, every year in venture. And I think it's gonna be really, really important for us to try to influence those players and not put responsible innovation in a side category, because I think that is doing a disservice to the change we seek. So that's the tension I'm experiencing as I'm consuming the content here. Here's an interesting question: what are your thoughts on Marc Andreessen's Techno-Optimist Manifesto? Someone wanna give some context on it? Yeah, so if you wanted to. Go ahead, go ahead. Marc Andreessen, he's a big deal in that he invented Netscape, am I remembering right? And he also has a ginormous venture fund of his own. This week, maybe last week, he published a blog post, The Techno-Optimist Manifesto. It's a very long read. If you have some time, you should read it. I would say he believes that technological innovation, and I'm trying to do it justice and fairness, is best when it's sort of unfettered in terms of constraints. I think that's how I would describe it, but there's a lot of nuance to it. There is and there isn't. So I do wanna think about it.
Yeah, I mean, I had a couple takeaways, but one is that, especially as it pertains to AI in particular, it feels to me like there have been these two camps that have emerged. It's either, we need to halt any AI development, bomb data centers, go to the extreme, because this is an existential threat to humanity and we need to act immediately; or it's unfettered innovation without any constraints, et cetera. And I think what I'm waiting to see is: what is the positive vision of AI development that is in neither of those camps? I'm waiting for Gaurab's principles, the Responsible Innovation Labs principles, which maybe will guide the way. But I think this piece that was put out speaks to this sense that there are camps, and that it's not necessarily one big tent that can be unified. And yeah, I think it's indicative of a larger moment of where the dialogue is in technology spaces. I think a lot of it's just wrong. You know, I think it's really easy to be a techno-utopian from a lofty perch. My personal bias is I'm a techno-pragmatist. I think tech can solve certain things. I think innovation and development and all of the advances that we've seen through technology have been really meaningful. But to give technology in and of itself all the credit for saving the world, or creating the world in which we are best suited to live, is a really narrow and unsophisticated view of the complex systems that make even that very innovation possible. I think it ignores a lot of the inherent privileges that come with being able to develop in this place and at this time, and by this time I mean the last few decades in particular. And it really does a disservice to the broader conversation about what we need to do to create an equitable tech future, if you will.
And so I think it's sort of disappointing, and it feels a little bit like it is so, so utopian that it undermines its own point. Because if we acknowledge the nuance and the difficulty and the things that we're trying to do that we're not quite perfect at, I think we'll get a lot further in that conversation than if we just put out absolute belief in technology above all things and at any cost. Can I just add one maybe controversial view of it, which is that responsible innovators also want growth. They also like to make money. They like happy customers. I mean, I think part of the challenge for responsible innovators is not to rebut Marc Andreessen line by line, because that just creates distance and a comparison. I would like to narrow things to where there's legitimate disagreement. And also, Marc, if you're out there: responsible innovators want a lot of the same things. They believe in markets, they believe in innovation, and this is unhelpful. I think he is trying to create distance, and I think responsible innovators exacerbate that distance in an unhelpful way. And also, techno-utopians like privacy too. It's not like they don't want standards. So I would just encourage us to not take the bait. And as a tech for good entrepreneur, I say: yes, we're for that, but not at whatever price. And I think that's the issue: how far are you ready to go for your principles and values? In our case, I'm ready to kill my technology rather than take the wrong investment, because what obsesses me at night is to make sure that this is truly tech for good, not, as you said, washing tech while looking good. So one last question we have here. Will AI tech and investment mimicry aggregation render traditional entrepreneurs and investors obsolete? Could you say that again, please? Yeah, I'm French, I'm from Quebec, so reading this, I'm also trying to understand it.
So that's why I'm gonna read it again. Doesn't mean I understand it. Will AI tech and investment mimicry aggregation render traditional entrepreneurs and investors obsolete? I think I understand the question: it's about whether or not AI will replace folks in the workforce, specifically in investing and entrepreneurship, if I understand it. I'm less worried. A while ago, I was watching the demo of a new product, an AI co-pilot assistant for financial modeling. And as somebody who started her career as an analyst at Goldman Sachs and was glued to my Excel and had to build these models very manually, I looked at that and thought, oh my gosh, if this had existed however many years ago, it would have made my life a lot easier. Who knows, maybe that would have meant fewer people in our analyst class, I'm not sure. But my hope is that it also allows more time for higher-level thinking, for actually stepping away from, in this case, the numbers on the page and asking questions about what it is you're looking at. So that's my optimistic take on the possibility of replacement. But I'm curious what you guys would add. Yeah, I'd say I'm relatively optimistic on this point too, surprisingly enough. I think there will definitely be a wave of AI-generated ideas and a degree of replacement and augmentation that happens. But in the long term, we keep innovating; we don't just go away. I was sitting at a table at an event at MIT with a bunch of neuroscientists and computer scientists, and they were debating how the world will end, which of their technologies or advances is gonna make everything explode. At the end of the day, I think humans are incredibly resilient, and we innovate and we create and we do. 
So yes, I think there will probably be a lot of typical products and services that change or are replaced, or that don't have as much value for humans to be doing anymore. But ultimately, the thing about the AI we see out in the world now, these large language model driven chatbots and similar things, is that they're working with what we already did. They're not working with the stuff we haven't come up with yet. That's still inside the human being. So there's a lot to be mined from all of human knowledge and history if we're using these tools for that, but there's also a lot more to be created. The world is changing constantly. We face these incredible crises in climate and democracy and all these things. There's just so much to work on that I think will come out the other end of it. Yeah, I wanna have drinks with whoever wrote that question. I was like, ooh, that's big. I'm struck just thinking about the process of innovation. I could see an algorithm helping you make more money in sophisticated public markets where there are gobs and gobs of data. But I think the act of innovation is judgment: what is the human or business problem that needs to be solved? And I'm not anti-AI; I'm quite optimistic about AI. So perhaps that can shorten the timeline, but somewhere there has to be a judgment call about what is worth building. Maybe some future computer scientist will prove me wrong and the robots can figure that out too. But there's still room for that judgment. I also wonder to what extent capital allocators will want to abdicate their fiduciary responsibilities to non-human algorithms and programs. So I think there are some rules and norms that may not quite render us useless. I also think so much of entrepreneurship is based in creativity and breaking the rules and pursuing an unorthodox approach to problems. 
And that's something that, at least currently, machine learning isn't particularly good at. So what's interesting is that this question takes us into our closing, because the question we have is a closing remark. So I'm gonna go with it: what gives you hope? I think what gives me hope is that I spend time with a lot of founders who are quite young, have grown up as digital natives, and have a really deep understanding both of the possibilities of technology and of the risks, in a way that I think previous generations maybe did not appreciate, or maybe we were too naive, who knows. That gives me a lot of optimism. There also seems to be a lot of open-mindedness to things like shared ownership models, or new ways of thinking about building companies or businesses or communities or projects, right? We're even using different nomenclature in different tech ecosystems. And so that's the piece that gives me hope. I think the untapped grit and energy and enthusiasm of people who have not historically been represented in or had access to this industry, and to the kind of capital that will allow them to pursue their ideas, and this sense, this buzz, that we are all relatively dissatisfied with the way things are, and that we're at a bit of a tipping point when it comes to wealth distribution, when it comes to these big questions about the kind of society we want to be living in. That actually gives me hope. Some people are like, oh, it's hope for a revolution. I think it's hope for a change in attitudes, mindsets, norms that makes doing business the way we used to do it just impossible, right? I think there will be a generation of leaders, hopefully myself included, that are demanding we look differently at what it means to make progress. Plus one to younger people and emerging managers and emerging entrepreneurs. 
I think the mindset is different, and hopefully we build an ecosystem that's worthy of their values alignment. And I take some hope from the fact that this system was constructed in the 1970s and 80s; it's a collection of human behaviors, and we can chip away at those layers with different human behaviors. It's gonna take some time, but it's a decentralized system, and we can make progress sooner rather than later, and over the long run, hopefully, we'll get there. I will add that many entrepreneurs starting now have the awareness of what we learned in the past and are taking that forward. So plus one on the young entrepreneurs, but also plus five to the other entrepreneurs in the field right now who are making change. So we're going to the closing. I don't know if you have more questions, but what is the one takeaway? We spoke about so many topics. Now what is the one-liner? Say, guys, if you leave, keep this in mind. What is the one line you want people to keep? I think we have to raise the floor across all of these things. The baseline has to be way higher than it is, and I'm really activated around making that happen. It's not just about what we can do at the fringes; it's about making sure the baseline is that people look to innovate responsibly, to develop things that make sense in the context of society, not just in the context of investor returns or their own personal wealth creation. I would say tech is not a panacea, and when we pretend it might be, I think we're both deluding ourselves and potentially opening ourselves up to a dangerous path. If we really want to see change, innovation is part of it, but so is policy change, so is regulatory action, so is research, et cetera. And a lot of the problems we have been talking about are ultimately human problems with human solutions, and we need to keep that in mind. I'll just come back to responsible innovation as mindset and operating norm. 
It's not a category of what; it sits more as an essential how. And if people in the room can think about their own leverage for making that mindset more present and the operating norm more durable, that would be incredibly helpful. One more question: what ideas, what technology, like generative AI, can be used for the public interest? How can it help? Well, this isn't specific to the domain I work in, but I'm really excited about the possibilities of generative AI for art and music and creativity generally. There was a moment when Midjourney and DALL-E and these image generation tools came out when people were like, oh my gosh, art is dead, artists are no longer needed, or something like that. Both of my parents work in the arts, so I grew up around a lot of artists, and I just kept thinking: artists have been using tools and machines and digital tools to create really fabulous art for generations. If anything, I'm really excited to see what human artists decide to do when they harness some of these tools and adopt them and incorporate them into their practice in different ways. To me that's not a threat to artists at all; it's just another tool in the toolkit. So that's the sort of thing I'm excited about, and I think it is in service of the public interest because it's in service of the creative commons. I should go to ChatGPT and punch this in, I haven't yet, but really selfishly, I think having generative AI computationally help us think through how you monetize responsibility, or the lack thereof. In sustainability, you can now tell a company the actual, literal cost of not doing something, and I think you see that shift behavior. 
I think sustainability is now not just the right thing to do; it's future-proofing your company. And I would love that kind of computational helper. If there's a great consultant in the room who wants to do pro bono work, perhaps, we have no money, it's just: how do we make the case, the business case, not just the values case, that responsibility is the right thing to do? And also, how do we measure it? You and I were talking about this today, right? It's really, really hard to measure; it's sort of proving the counterfactual, right? That's what it is. So anyway, those are some things we could use, and if you do this, put it in the public domain for us. Thanks. Yeah, and I think there's still, I mean, this is not a new challenge, but there's still a lot of open space in building healthy communities online, as I'm sure you know, and there are not a lot of open source tools, not a lot of public domain tools, that are really focused on helping people do that. In the meantime, there is an entire ecosystem dedicated to community moderation, to detecting unsafe images, to looking at these things, and those are jobs that take a really heavy human psychological toll. There are things we could be doing with more sophisticated AI that we're not necessarily investing in, because the big companies that have the biggest problems don't actually have any monetary incentive to solve those problems, because they tend to drive more clicks. And so if there are tools that can be built to help create and promote healthy communities online, if there are things we can do to help solve those concerns in a way that also makes it easier to scale new platforms for community connection online, I think that'd be really awesome.