Hello, everyone. My name is Mickey. I'm the digital marketing manager here at Parsons TKO, and we are a digital transformation consulting agency. I'm so excited to have all of our panelists here today to talk with you about leveraging AI tools in the think tank sector. And with that, I will hand it off to Nate Parsons to start our introductions.

Thanks, Mickey. Hey, everybody, it's great to be here. I'm Nate Parsons, one of the co-founders of Parsons TKO. I'm a recovering technologist myself, and I've been following the artificial intelligence news with great interest, reading up a lot on it, and starting to do a lot of my own writing and thinking on where I think AI can fit into the mission-driven sector. I think a lot of us are struggling with that problem, and I'm joined today by some very insightful and thoughtful guests who are going to give their perspectives as well. We're going to explore this question together. So I'll hand it over to Fuzz Hogan to introduce himself, then Alessio Bayak can introduce himself after that, and then we'll have Stephen Bird Kruger, who's going to represent PTKO on this panel discussion. Thank you all for joining us.

Hi, I'm Fuzz Hogan. I now work at CNN, but I spent seven years at the New America think tank in DC, which, as the bio said, had journalism in its DNA, so it was a natural place for me to be as the head of their comms unit, since I had a long history in journalism. That's my background. I've got a much longer bio, but I'll save it for now and toss it over to Alessio.

Hi, I'm now six months into being director of data visualization at the Urban Institute, a fifty-plus-year-old nonprofit research organization, or think tank, doing a lot of data-driven analysis on issues of social and economic policy.
And I come from a longish twelve-year career in journalism, mostly science and data journalism, most recently at USA Today. I'll pass it to you, Stephen. Thank you.

I'm Stephen Bird Kruger. I'm the head of the data strategy practice at Parsons TKO. We're a consultancy working with mission-driven organizations, and in my work in particular we focus on data: not just the collection and technical operations of that data, but really the strategic side of it. How do we identify new uses and new purposes for data within an organization, and then think downstream about how data affects business processes in mission-driven organizations? A lot of my background is in the think tank sector. I spent my first seven years at a think tank in DC, so I have a lot of empathy for folks in the think tank sector and the knowledge sector, and I recognize that data is more than just the numbers. Data is also knowledge in its many forms; data is represented in the research that gets produced. Thinking about artificial intelligence as a whole new way to operate on that data in our workflows is tremendously exciting, so I'm really glad we were able to get this group together for today's discussion.

Thanks, Stephen. It's great to have you all here. We'll start off with a little bit of exploration of where AI could fit into the nonprofit and mission-driven space. I think when a new technology comes out, we're all sort of wondering: can I use this? And if we can, what are the safe places to experiment with it while we both learn how to use this technology effectively and manage the risk it might carry, because it's immature and still evolving? So I'll just open this up to all of you.
I'd love to hear your thoughts on where and how artificial intelligence might fit into think tanks' work or mission in these early days, and where you're exploring that right now.

I'll start, and I'll probably give the dumbest answer, so it's good that I go first and we save the best for last. It seems like at this early stage, it's about scale and depth, but not audience-facing work; that's the summation of where I come down. You can get it to do things that would otherwise take a lot of people, or reach people and places you can't get to for safety or logistical reasons. But you should be really wary at the general, 10,000-foot level. It's not a cliché yet, but my internal cliché is that it's an army of interns. Some interns are terrific and their work should be public-facing, but you can't guarantee that a thousand of them will be, so you want the same sort of clearing mechanism you have with interns: some are fantastic and their work can go right out, but you want to make sure you're looking at every intern's work before it goes out the door.

Yeah, that sounds right, and I actually like that metaphor, this idea of really holding our horses on public-facing work and collateral that we would want out there, even if it's QC'ed, when it's produced by these kinds of tools. Definitely. To Fuzz's point, we've been looking, at least internally on my team (I run the data visualization team), at internal processes. And coming from journalism, where newsrooms are embarking on experiments, it's not so much robo-journalism, creating a thousand stories about the latest COVID numbers or baseball box scores.
But also looking at internal workflows where we can make shortcuts: extracting text from PDFs and those kinds of rote tasks. That's where I think we could all agree a lot of the experimentation is probably headed at this first stage.

For me, I think our perspective on what AI can do has been skewed a bit by the news lately: ChatGPT, the whole GPT ecosystem, these large language models, where the focus has been on look how quickly it can write all this content. So we naturally gravitate to "can this do my homework for me, get the blog post written, get the social media post written." But for everything that we publish, there is so much that's happening internally. And so I think the real focus needs to be on what are those internal processes where these tools can play a role. Take something like ChatGPT: it's good at creating stuff, but it's also really good at just being an interrogative partner. It's as good at asking questions as it is at answering them. Using these tools that way, you can give everyone a mentor, you can give every single person a tutor, a partner who's available 24/7 to push and pull, and find new ways for us to recreate and reimagine some of our internal workflows with this new partner we can bring into the process.

Yeah, my mind's already whirring away with lots of ideas on this. One thing I might ask, given that positioning of an army of interns that can reduce the labor internally:
What are some of the areas of analysis or operational workflow that you think you could set this loose on? And to step into your point about being curious: do you all feel comfortable with the idea of critiquing somebody's work using AI, or offering a counterpoint to somebody's work using AI? I think it's a really interesting example, because one of the things I've seen in my own research with AI is that it's very good at knowing what the lay of the land is if there's a lot of published material on something, so it can surface the pro and con arguments of something that's been debated in the media. It's not so great at original critique, in my experience, but it does do a good job of surfacing what's out there. I'd be curious to hear where you all think it can help and where those interns can be effective.

The one thing, and I don't want to keep going first, but the one thing I remember when I first arrived at the think tank where I worked, and I think this is true of most think tanks, is that the life cycle of a comms professional is shorter than the life cycle of the policy analyst. So it's almost like: use it to know your own self. You can never catch up if you arrive in year seven of a policy program and want to do typical comms blocking and tackling: hey, we said something about this six years ago. The expert is not thinking that way, for various reasons, but you are. It's almost like, can this kind of AI tell you about your own organization? And again, you would then go back to the policy expert: did we say this in 2014?
Is this still current, or, sort of bouncing off what Stephen said, can it surface your own work back to you? It's a bit of "know your own work": surface it to you internally, screen it internally, and then push it back out. That seems like one quick opportunity for AI to help surface stuff internally.

Yeah, I think that's a really interesting use case, particularly because it's gated; it's behind that public-internal firewall, so you would be able to do some of that QC. And I don't think of it as too different from hiring data scientists or someone to build models or tooling to assist in that regard. But why should we be relying on something that is then going to use all of that corpus we've just fed it for whatever products are going to be built on top of it? I think that's a discussion we need to have in the think tank space if we're going to load in that data. But I agree that the use case is valuable: we should have that database-backed search, surfacing important keywords, trends, entities, et cetera.

I agree with both of those perspectives, and I'd round out the answer: yes, there's a lot that's very exciting about the current capabilities. And that's the other thing: the capabilities we're seeing right now literally didn't exist, in this public way, even six months ago, so what it's capable of is changing very quickly. When you look at something like ChatGPT, I actually think it does a better job at critiquing than, Nate, your experiences suggested when you were testing it. In particular, I think it's not bad at self-critiquing.
If you ask ChatGPT something and it gives you an answer, and you ask "what is wrong with that answer," it very quickly picks out "well, this part of what I said seems implausible, and this is why," and those answers are better. Right before we started, Alessio was talking about the idea of AutoGPT, where you can have the model chain itself and push itself. You can use these tools in iterative ways that improve the quality of what you get out of them very quickly.

But I'm excited about this moment because it's getting everyone to think about AI. ChatGPT is new, but AI is not actually that new; it has been a field of research for a long time, but it has never felt accessible to us. It has never felt like something we have either the expertise or the budget to tap into. I think this is showing us that these tools are a lot closer: you can log into a website and, boom, you're using artificial intelligence right now. And I think it could open the door to a lot of much more boring uses of artificial intelligence in think tanks: things like classifying our content, things like being able to expand your taxonomy across all of your content, because we all know we don't do a great job of that. We always miss something, and sometimes forget about it for weeks or months at a time. So you can really expand taxonomies, and expand them beyond just the website. How can you use the same taxonomy in your email, your social, your funding appeals, and your petitions, so that all of these things have connective tissue you can use more comprehensively? Maybe it's nuts and bolts behind the scenes, but it makes a lot of new uses of content and data possible when you start looking at all the little opportunities to use AI.
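To make that taxonomy idea concrete, here is a minimal sketch of how an organization might ask a language model to tag content against a shared, controlled vocabulary. The taxonomy terms, the prompt wording, and the `parse_tags` guardrail are all illustrative assumptions, not anything the panelists described; the actual model call is left out, since any vetted chat-completion API could fill that slot.

```python
# Hypothetical sketch: tagging a piece of content against a shared taxonomy
# with an LLM. TAXONOMY and the prompt wording are placeholder assumptions;
# the model call itself is omitted -- swap in whatever API your org has vetted.

TAXONOMY = ["housing policy", "tax policy", "education", "health care"]

def build_tagging_prompt(text: str, taxonomy: list[str]) -> str:
    """Construct a zero-shot classification prompt.

    The model is constrained to the controlled vocabulary so the same
    tags can be reused across the website, email, social, and appeals.
    """
    terms = ", ".join(taxonomy)
    return (
        "You are a content tagger for a think tank.\n"
        f"Allowed tags (choose one or more, comma-separated): {terms}\n"
        "Reply with tags only, no explanation.\n\n"
        f"Content:\n{text}"
    )

def parse_tags(reply: str, taxonomy: list[str]) -> list[str]:
    """Keep only tags that are actually in the taxonomy -- a cheap
    guardrail against the model inventing new categories."""
    proposed = [t.strip().lower() for t in reply.split(",")]
    return [t for t in proposed if t in taxonomy]

prompt = build_tagging_prompt("A brief on rental assistance programs.", TAXONOMY)
# A model reply like "housing policy, health care, urbanism" would be
# filtered down to the controlled vocabulary before it touches the CMS:
print(parse_tags("housing policy, health care, urbanism", TAXONOMY))
```

The filtering step matters more than the prompt: it keeps the "army of interns" from quietly growing the taxonomy on its own, which echoes the human-review point made earlier.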
Yeah, it's a fascinating time. It reminds me a lot of the early days of the search engine world, when people were first starting to realize the web was too big for anyone to keep a list of everything, and people were trying to figure out how to find things they hadn't found before, trading links around. It was very archaic compared to today in a lot of ways, but it's a similar kind of period for AI, and for generative AI in particular.

I'm curious: this idea of critiquing yourself and getting to know yourself as an organization assumes the AI is very familiar with your content. Do you have thoughts on the role of think tanks, as organizations that fund and curate expert opinion on various topics and issues? What's their ethical role in helping improve artificial intelligence systems, and in giving back to the community in some respects, given that these systems are still owned by large commercial entities like Google and Microsoft right now? It's a little similar, I think, to the debate over Facebook when it started to get big and a lot of nonprofits moved their messaging and groups onto Facebook. You're ceding a lot of control and curation to commercial entities who have different goals and opportunities, while the communities are heavily leveraging and benefiting from those tools at the same time. So I'm curious to hear your thoughts on the role of the think tank in that ecosystem.

Well, I'll jump in here, though maybe I'm not hyper-qualified to be suggesting what the sector should be doing.
But it does seem like we're in this phase of substantial growth; the velocity is insane right now, as are the opportunities, along with a lot of media coverage and excitement. There have been folks who raised issues of regulation; you can't go a day without hearing about the need for it, just like when we were hearing about it around big tech. It's a little disconcerting, in my opinion, that we're not hearing more from those in the knowledge economy about the red-teaming we need to be doing. By that I mean we need to be poking at these models and seeing what the bad-case scenarios are, what the worst case is, how things can be misinterpreted. And that's to say nothing of the jobs that are going to be replaced; we're talking about literally knowledge jobs that are going to be replaced or edged out. How are we ensuring that whatever our expert opinion is on a certain matter, or whatever our policy program has shown, is not going to be completely displaced by some model relying on some data into which we have no view? That's where I think we need to situate folks from the academic space and the think tank space, as well as the regulatory space, to be thinking about how we put up guardrails.

Yeah, just to say the same thing in a different way and categorize it: the traditional think tank space is research, and some think tanks feel comfortable with advocacy. At the highest level, that's what you want to be doing: what's happening, and what should be happening. And of course convening is a nice think tank role, bringing people together to address those kinds of problems.
Yeah, my favorite example of, as Stephen says, AI having been around for a while: there's a book called Automating Inequality that a New America fellow, Virginia Eubanks, wrote a while ago. She's not the only one; there are a lot of examples of states turning to technology to assess and deliver welfare benefits, and in 2023 I don't have to tell you the end of the story. It didn't go well. And the good news, frankly, and Alessio is absolutely right, is that it's easier now. With Maria Ressa and Facebook, we're seeing all of this; it's not hard to wake people up. Ten years ago it was kind of hard to wake people up, like, hey, there are risks here, and the response was "oh, it's going to be great." So there's an advantage, at least politically, in a small-p way, and with funders. I haven't been in the think tank space for three years, but I assume there's a lot more awareness, and a lot of the funding is from tech and former tech people, so you've got to be careful who you're pitching to. But I think those are all positive tailwinds.

A lot of those tech people are some of the most cynical and concerned about the implications of their creations, we should note. Yeah, exactly; I think there's a lot of awareness there.

So many good things to respond to. I think that point about regulation is a good one, because we know that we cannot rely on it; regulation is such a lagging indicator. And the red-team example is really happening in the wild: so many people are actively pushing these tools to do bad things, and that's making the headlines as much as the capabilities are. And yet, while we see these worst-case examples, we don't have real centers of excellence yet.
Where are the groups that say, "we have found the safe path; we know what these guardrails are"? I would love for the think tank sector to be the place where this happens, because they have the mandate: the mandate to use tools and processes to extract knowledge. One of the places I thought your question was going to go is, do we even need think tanks anymore, if we have these models that can do all the work, process all the inputs, and come up with the insightful outputs? And, said cynically, I think the role of the think tank is to be that place of convening. I think you said it precisely right, Fuzz. It's a place where you can create frameworks for how policy analysis occurs, frameworks for the common goals that a whole bunch of thought leaders are trying to achieve at any one moment. I think it provides a really useful vehicle for making sure we're doing safe and responsible things with our policy analysis. It's the ethical layer we can put on top of the ability to produce infinite insights, infinite findings. There's that level of judgment and curation that we put on top of it.

Everyone's lost in thought here; I can see everybody chewing on that. Yeah, just to follow up on that: I do think we're in an interesting time in the information economy overall. There have been a lot of economic incentives over the last ten years to move from an open and free sharing of information to these closed gardens, and that's happened for a variety of reasons. Some people want information cleaned up, curated, and official-sounding; commercial entities want to hold on to the profits and their ability to advertise to people and manage the user data they're gathering by having those people there.
And you can see that one of the challenges ChatGPT in particular, as an AI system, is going to bring is this idea that you go to one place and get one authoritative answer to a question. There's no competing information shown to you. In the current way people often look for information, if you search on Google, you see pages of results, and sometimes you can see counterpoints within those results, so you're not seeing just one perspective at a time. But with ChatGPT you only get one perspective in your answer: you ask a question, you get one answer back, not ten answers side by side showing different angles. So I'm curious, to that extent: is AI a competitor to think tanks in terms of policy analysis and influencing people? What is the tension between somebody going to a think tank directly for that information versus going to one of these systems that in theory is aggregating information from all over the internet, and from think tanks, but maybe is giving an incorrect answer, an inauthentic answer, or, in the worst case, a biased answer that's programmed by somebody behind the scenes for their own political objectives?

I'm wary of speaking for CNN, but just as an example: we're CNN, and we're not YouTube. If you're equating a news organization with Google, that's the audience's problem. And so if your brand as a think tank is "we're going to give you full analysis and coherence around the great wild sea of information," and you're seen as equal to ChatGPT, then that's a problem for the think tank, but it's also a problem for the audience if they're actually not seeing a distinction. It goes both ways: it could be the think tank's problem, if they're not providing qualitatively better content than ChatGPT.
And if the audience sees them as the same, then that's the problem.

It reminds me of the data, information, and news literacy debates, where we can impose a lot of red flags and extra context and try to preempt what people are going to interpret from a single thing, be it YouTube, Twitter, Parler, or a news site. It gets back to the crux of that debate, which is: whose responsibility is it to guide and shepherd society into finding the information crucial to a functioning democracy? That's a really hard problem. What it also reminds me of is that ChatGPT, at least in the way most people are experiencing it, is very linear, a kind of two-step process: you're basically just inputting and outputting, inputting and outputting. That's not how knowledge is created; that's not how science works. That's not how the UX should be set up. So how do we do what think tanks are great at, and what news organizations are hopefully great at, which is providing nuance, shedding light on not just the black and the white, but the gray in the middle? I think we're seeing from a lot of the outputs that a definitive answer is oversimplifying.

So well said. And knowledge itself does not have inherent value; knowledge is valuable in its application, valuable when we use it for a purpose. So when you're looking at think tanks, you have to look at the whole apparatus of a think tank. A think tank is not just the scholars churning out PDFs; there have been lots of studies on the value of a PDF by itself. In this era where knowledge becomes cheaper, let's just call it that, the value of a think tank is in the GR team.
The value of the think tank is in the combination of the government relations function, the comms function, and the convening and events function. It's in the fact that we can use this knowledge with a purpose, as part of a community, with a real focus toward some sort of societal impact. So I think it's everything that's downstream of that knowledge, in addition to the ability to produce and curate that knowledge in the first place. This doesn't dismiss the value of the research; that is the foundation of all the other work. But I think it is really important that we look at the entire integrated apparatus of a think tank as what retains value in this increasingly competitive knowledge landscape.

Yeah, it's challenging. When these technologies come in, they're very disruptive, and there is often a push and pull between good actors and bad actors in how they're used, and even in how they end up settling into the marketplace. I think it requires a lot of active stewardship for it to land in a good place. Maybe along those lines, I'm really curious: if you were to fight fire with fire, what can AI enable think tanks to do differently, or what different sorts of engagement with users can it allow?
One of the simple examples I think about: we do a lot of interviews with people, and we'll often use AI to summarize some of the key points, or tell us where the points of conflict are in a conversation. It does a very good job of saying, "in this information, here are some things I can summarize that are interesting to you." It's not a guarantee that it gets everything, but it has dramatically cut down on the labor, and on our ability to use those conversational tidbits more effectively and usefully in our own work. So I'm curious, as you all think about using AI internally: what are some of the ways you might want to give audiences new experiences that aren't possible today, whether because of labor constraints, physical constraints, or anything else? Because I do think it's one of those things where, if it's one-sided, it won't turn out the way anybody wants.

Well, it seems we need to use it as a force multiplier in various kinds of applications, from research design to assisting in data collection and data analysis. I mentioned classifiers earlier: we could be using them to augment the expert skills we have in our fellows and our analysts, who are, say, coding massive data sets along categories. Suddenly that's a multi-class classifier: could we feed this to AI? People had already been experimenting with that before ChatGPT was released. And I also think of real worst-case scenarios, where you have a whole spider sending out tens of thousands of phishing emails a minute. So how do we use, or imagine, that kind of system being applied to how we gather knowledge around a specific research question? Could we use this instead of other systems that are a lot slower? That would be a positive spin on that kind of capability.

Building off that: another challenge for a comms person, especially at a place like mine, where there are a lot of new programs, is that I don't know the audience this person is trying to reach, or I don't know its boundaries. So can this kind of AI go hear what that audience is talking about, in a way that is either a new capability or a time-saver, and help me understand the needs of audiences I otherwise wouldn't be able to reach, or hadn't reached before and have no history with? And a question that just occurred to me, as a researcher: can you ask these tools to approach the research from an audience perspective? Can you say, "I want you to look through this data set as if you were" a particular character? Or, to tie to things we talked about earlier: "deliver this to an audience of legislators this time; no, school superintendents this time; no, people who live in East LA." Those are things to think about: can something do that kind of work at scale?

One hundred percent. That's one of the big new fields that's emerging: prompt engineering. How do you prompt these AIs? One of the classic AI tuning steps is that you start whatever you're going to say to ChatGPT with "pretend you are a blank": pretend you are a legislator, pretend you are a sixth grader, pretend you are a grandmother.
Whatever you want it to be, it adopts that mindset in the way it uses its language. So yes, it's extremely tunable. But I think you're touching on the real thing here, which is its ability to process tremendous inputs at speed. So you think about this AI as part of your social listening paradigm; you think about this AI as part of, say, your membership model, where you're trying to curate something for a long-time member who's had all kinds of interactions with you over the years. And again, you were just hired on, and you've got this twenty-year relationship you're supposed to steward, or you're a fundraiser doing the same thing. Those types of research activities are typically measured in months if you're trying to do them for a whole population; think about focus groups and surveys. What's the turnaround time on those when you're trying to do something today, in, say, a news cycle? It just doesn't work for real tactical operations. But with AI you can do things at tactical speeds. Something in the news just broke: what's going on this morning? You pop all of this into a model and get summary outputs, and then you pop in all of the content you've ever produced and ask which of that content is most relevant, and try to surface it. It's like a superpowered and much more nuanced internal search system that you can develop, and external search too. So you can stitch together a lot of things once you start looking at all of your systems, all of your audience engagement touchpoints, as data, in a much richer, more qualitative and nuanced sense that you can start to leverage.
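The "pretend you are a blank" tuning step described above can be sketched in a few lines. Everything here is illustrative: the personas, the example finding, and the OpenAI-style role/content message format are assumptions, and the actual model call is omitted; any chat-completion API with a system prompt works the same way.

```python
# Hypothetical sketch of audience-persona prompting: the same research
# finding is framed for different audiences by swapping a persona line
# into the system prompt. Personas, the sample finding, and the
# role/content message shape are placeholder assumptions.

def audience_prompt(persona: str, summary: str) -> list[dict]:
    """Build a chat-style message list that conditions the model on an
    audience persona before asking it to reframe the summary."""
    return [
        {"role": "system",
         "content": f"Pretend you are writing for {persona}. "
                    "Match their vocabulary, concerns, and reading level."},
        {"role": "user",
         "content": f"Reframe this finding for that audience:\n{summary}"},
    ]

finding = "Rental assistance reduced eviction filings by 18% in the pilot."
for persona in ["state legislators", "school superintendents", "a sixth grader"]:
    messages = audience_prompt(persona, finding)
    # Each variant would be sent to the chat model; here we just show
    # that the persona is baked into the system prompt.
    print(messages[0]["content"][:60])
```

Keeping the persona in the system message rather than the user message is the conventional way to make the framing "sticky" across a multi-turn exchange.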
Yeah, that makes me wonder. There's a push and pull I hear in the answers here: on the one hand, it has this incredible ability to scale in certain ways; it can do many things very quickly, and it can slice and dice things with nuance and fine levels of detail. That might tempt me, as an outsider, to say, oh, maybe this could be a really good "ask the expert" tool on your website, where you can ask it a question and it responds. But, yeah, I see you shaking your head. You're probably thinking the same thing I am, which is: if nuance is important for answering the question, maybe it's going to be tough to get it to provide expert answers. I'd love to hear your thoughts on that, because I do think that's something people will start selling. I remember when chatbots came out and everybody put a chatbot on their site so you could ask questions, but pretty quickly people were disillusioned with the quality of the chatbot experience. So I'm really curious: as AI comes in and there's pressure to use it to create custom experiences or respond to particular queries, how might you think about deploying it, or not?

This is where, in an email to the group, I was an absolutist, and I will now return to my absolutist stance: these tools should not be public-facing for at least, well, the next time we have this conversation is at least a year away. It's just way too risky for the organization. If you're promising the world that you are telling the truth and you're never wrong, don't hand over that brand to a robot. And if you're telling the world that a robot can do your job, what are you saying about yourself?
Yeah, I agree. It's that internal versus external split: these should be pressure-tested as internal processes, shortcuts, and force multipliers, but be very wary of releasing things externally, including things that may have been copyedited and produced internally with this assistance. That stuff probably needs an extra layer or two of editing. I wouldn't really trust any of the literature review it had been doing, or any of that, yet. And the other thing, just to be the journalist out here: at some point you have to be transparent about whether these tools were involved. That's one of the checks. We have to disclose this, so since we have to disclose it, do we want to tell the world we did this? If you don't want to tell the world you did it, then don't do it. There will probably be innocent ways, if I thought about it a bit, to say ChatGPT helped us come up with these things, or did this research. But the test is: if you're comfortable telling the world you did it, then you can do it, and if you're not comfortable telling the world you did it, then you shouldn't. I agree completely with everything you both just said, and I'm going to play devil's advocate for a bit; qualifier, I agree completely. And yet, people are going to do it. Organizations are going to do it. We are going to see ChatGPT-powered knowledge assistants, Clippy on steroids. And beyond that, the audiences we are trying to influence are already doing this. They are already using ChatGPT as an alternative to finding the resources that we produce as a sector. So, recognizing that competition, it's adapt or yield to these alternative sources.
So we have to face the tremendous pressure that we are going to have to modernize the user experience, make that knowledge better curated and more accessible, and these are the best tools for doing that right now. So I think recognizing that challenge is really important for think tanks. What we're doing, we're not doing in isolation; we're doing it in a very shifting digital landscape with shifting audience expectations. And this is where I think it is so important to have these guardrails, because we recognize it's going to happen. As I said before, you can throw all your data into these systems, but I would never trust, whatever version I'm on and whichever platform I'm accessing it through, whether it's OpenAI's or Microsoft's, throwing all of my organization's data into that platform wholesale, with who knows what the terms and conditions are today. So you have to really be thinking about the different ways you can access these capabilities. You can have your own version, your own model, where it is your data on your platforms and it's not going to anyone else. So there's a lot to think about in terms of how we engage it. And then, most importantly, I agree completely: if you put ChatGPT on your homepage as a little bubble that people can talk to right now, you're taking an unacceptable brand risk. But how can you test that capability to know when you've gotten good enough at using it that perhaps it is safe enough? That's where those internal use cases are going to be absolutely essential. We have to have cultures of experimentation internally in think tanks.
And if we're not playing with it, if we're not really pressure-testing it, you know, red-teaming it, trying to see how far it can go before it breaks, how far it can go before it hallucinates something that is damaging to our brand and reputation. That's where I think having an internal chatbot matters: whoever is going to be responding externally should have the tool we don't yet trust the public with, because they will be that last gate, that last filter. But it changes what the day looks like for a lot of think tank staff if they are going to try to keep up with these technologies. And back to transparency again, it's another opportunity for this conversation. I can think about the journalism sphere, but I'll stick to the think tank sphere: there's funding transparency, right? Some think tanks are transparent about who funds them, some aren't even in the category, some are more transparent than others, and some use transparency as a brand attribute. If you are one of those think tanks that thinks you're the whitest of white hats, you're saying: we have a funding table, you can tell not only who funds us, but all our papers carry a funding note, who's funding this paper, that kind of thing. So if you're the kind of think tank that has a really strong position along whatever part of that spectrum, say it, and start to delineate a line across the think tank ecosystem: these are the think tanks that disclose this, and these are the think tanks that haven't. You start to say, we're the kind of think tank that is transparent about our uses of ChatGPT, and you can see the ones that are not. Not to be super competitive, because it's generally a nice world, but if you start to live in that world, this deepens; there's going to be pressure.
Right, so the way to react to that pressure is to be transparent about which choices you're making, so the audience knows what to expect when they come, and they can then see the ones that aren't transparent and decide whether they trust them; that's their choice as an audience member. Yeah, I think that's so important, because a good example of that is Wired magazine, which recently put out an article explaining what they're going to use ChatGPT and other generative technologies for and what they're not. I think it was really illuminating, because they just said, here's how it's going to be, and you'll know when we do this kind of thing whether AI is involved or not. One of the things that has always been true in the US in modern times is that we give technology companies a pass on ethics and decorum. And I think this is another place where there are some obvious things, especially with tools like ChatGPT, that we all know they should do but that we're not enforcing; the community isn't putting enough pressure on them. In most literature it's common practice to cite your sources, to say, I got this information from here, and that's not a common thing in ChatGPT responses: it very rarely links you to its sources, and it often makes them up. Hallucination, that's the term. This person did this thing, and here's a footnote, an invalid footnote.
And in science things change, and in policy things change, and ChatGPT rarely puts a date on where it got a position from, so it might be giving you the 1970s position instead of the 2023 position on a policy issue. So I do think being really public about the things we expect and want these technologies to do puts pressure on how they're actually designed and developed. Because one of the things that happened with Facebook and other platforms is that they started out feeling innocuous to society, and then they became so large, but they were still entrenched with the ethics, rules, and ideas of their early days. I do think there's an opportunity here for think tanks to put a lot of public pressure on these companies, to say: here's how these answers should actually be structured, here's what an ethical response looks like. If that pressure doesn't happen, it won't happen; it'll continue the way it is now, where they try to get away with as little as possible, because that's cheaper and easier for everybody. Yeah, we're making great progress, and we've got some really good questions in the Q&A, so I might throw a few of those out there for the team.
One of them, which was interesting, was around the red teams: is there work that needs to happen this year to combat the inevitable AI-powered misinformation campaigns that are coming with the presidential election and the other events that will incite a lot of actors to use these technologies for even more quote-unquote fake news and public manipulation? Are there things think tanks should work on this year to help prepare for that? Yeah, I have an answer, but it's actually based on work that I know Alessio has done. I think there is a media monitoring and social media monitoring function, in terms of active combat with misinformation, and the think tank sector is a big part of that. There are of course think tanks that specialize in this, but given that this misinformation is going to happen across all issues, across all subject areas, it's going to accelerate a lot of the combating of myths and misinformation that has been happening. I think all think tanks will have a responsibility; you are guardians of your issues, guardians of the public mindset around your issue. Everyone needs to have some function where they are monitoring this. Can we use AI to do that faster? You tell me, Alessio. Yeah, well, in my brush with natural language processing, not natural language generation, I had been looking a lot at the social media listening space and other kinds of corpora where you can start to try to pick out trends. It's easier when you know what you're looking for; it's much harder when you're saying, give me something that kind of sounds like this but is untrue.
Yeah, right, and we know it's very difficult to predict what the next wave of misinformation strategies is going to look like. But I agree that those who are guardians of an issue will want to get out ahead of it, or at least be clued in as soon as possible, relying on these kinds of tools. It's not necessarily ChatGPT itself but the system it's built on, learning how to ingest data at that scale. Maybe you're partnering with a consultancy if this is that close to your business model. With the birth of social media, a thousand flowers bloomed in social media tracking and marketing analytics firms; you're probably going to see the same thing with firms that tune these large models to your business case, and in the think tank world we're probably going to see that kind of thing grow, which could help us respond more quickly to those issues of misinformation or news cycles we want to peg to. Yeah, the same thing, but more on the prebuttal side: I think we're learning that prebuttal is effective, maybe more effective than after-the-fact fact-checking, and we do a lot of fact-checking here. The think tank's role would be, and again, back to convening: gather some of the political operatives and ask, how would you use this if you were nefarious, and start to publish and talk about those use cases so people are ready, both journalists and the public. It's kind of hard for a think tank to reach a mass audience, but you can get ahead of that game. And can AI help those think tanks get in front of journalists? That's another question. If the AI can prebut, then that's the play.
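The "easier when you know what you're looking for" point can be made concrete with a toy sketch of known-claim matching: flag posts whose wording overlaps a list of already-debunked claims. This is an illustration only; the claims and posts are invented, real social listening pipelines work at far larger scale with learned models, and, exactly as Alessio notes, this approach cannot catch novel misinformation it has never seen.

```python
import re

def tokens(text: str) -> set:
    """Lowercased word set for rough overlap matching."""
    return set(re.findall(r"[a-z']+", text.lower()))

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: shared words over total distinct words."""
    return len(a & b) / len(a | b) if a | b else 0.0

def flag_posts(posts, known_claims, threshold=0.3):
    """Return posts whose wording overlaps a known debunked claim."""
    flagged = []
    for post in posts:
        for claim in known_claims:
            if jaccard(tokens(post), tokens(claim)) >= threshold:
                flagged.append(post)
                break  # one match is enough to flag this post
    return flagged

# Hypothetical debunked claim and incoming posts
known_claims = ["ballots were shredded at the county warehouse"]
posts = [
    "BREAKING: ballots shredded at county warehouse overnight",
    "Lovely weather at the county fair today",
]
print(flag_posts(posts, known_claims))
```

The threshold is the tuning knob: too low and the monitor drowns in false positives, too high and paraphrased versions of the claim slip through, which is why this only works as a first-pass filter ahead of human review.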
There's a piece of this that has to be said, and I don't know how many of our listeners are coming from the philanthropy side of think tanks, but this is going to be a new capability that think tanks need to develop, whichever part of it: using it internally and operationally, using it for that external situational awareness, using it for content generation, using it for rebuttal, because all of this is new. And I want to thank Fuzz and Alessio for being here; it was hard to build a panel for this conversation, because nobody feels like an expert yet. You talked about it, and I loved the phrase: the prompt whisperer. So for the audience out in the world, as far as you can tell, what are the adjacent skills? Is it the liberal arts major who can shape the language as a human, or more of a tech person who understands how the technology works? I can't remember the name of the person or the publication, but I read an interview with someone who is now a prompt engineer, and she was struggling in that job to figure out how to get the AI to do just the right thing. Her boss described writing prompts for AI as casting spells. It really is like that: you use just the wrong word, just the wrong phrasing, or say it in the wrong context, and you get a completely different outcome. It's very much an art form as much as it is a science, learning how these things respond and how they change as they grow in power. There's something very mystical about it. Yeah, I feel like now I can anticipate Wordle's mind. Exactly. Can ChatGPT predict the next Wordle? That'll be the test.
And the point I was getting at is that these are going to be new capabilities; there isn't existing staff, there isn't existing staff time. Think tanks are going to need resources to do this. So I think there's going to be a big change, and there is an enormous need for funding. And again, it's not just for think tanks that are focused on this as a problem; all think tanks need to be able to steer into this capability. We've got to open the floodgates right now. Please, if you're here and you fund think tanks, talk to your think tank about what they need to lean into this as a new field of capabilities. We cannot hide from AI. It is a part of society now. We cannot put this genie back in the bottle; we cannot close the box Pandora opened. I just want to tag onto that a little: I don't think there's really anything too new under the technology sun. As much as the advances in ChatGPT and AI seem novel,
I do think we actually have a very well-developed skill set for prompt engineering; it's just in a place most people don't think of, which is research librarians. They have been learning how to do this for years and years, with card catalogs and with publications that don't use the same terminology. I remember there was a great crisis in the library world about fifteen years ago, when many librarians were seeing downsizing, or their libraries moved to digital collections where people never came to the physical space, and most of those folks ended up going into the user experience world, using their taxonomy skills to do content operations work. If this had come a little earlier, I think a lot of them would have gone into this instead, because there's an extremely high overlap between research librarian skill sets and getting the most out of ChatGPT, generative AI, and these large language models. That's one of the things I find hopeful: we actually do have people who could help us use these tools better. Like a lot of disruptive technologies, they're going to move industries, and we're going to have to rethink, reclassify, and be open to that expertise in a new place. Those folks are out there; they're just often not hired by the organizations facing these challenges, and I think that's one of the big societal upheaval moments here. So, we're closing in on time, and we've got a very active conversation going both here and in the chat, which is great. For the questions we don't get to today, we'll try to pull some answers together and send them in follow-up emails, so don't worry if we haven't gotten to your question.
One thing that might lead us out here in the last couple of minutes: there was a question in the chat about how people who are skeptical, due to the risks or the biases, should think about using AI. What is a safe way to experiment with or explore these technologies? I wonder if you all have learnings or thoughts on how somebody can dip their toe in, in a way that lets them get more educated, maybe even before it becomes a line-of-business tool for them. These guys have done more than I have, but my own personal experience is: just play around. There are a lot of opportunities; I don't think you're changing your Amazon or Apple algorithm if you do these things, though maybe you are, depending on how you ask it. The only real risk is that you change your identity in the eyes of the marketers of the globe. Generally, there's lots of opportunity to simply go to ChatGPT, type in some stuff, see what happens, and see how the journey goes. Yeah, I think the question was how do you pitch using AI to folks who are skeptical. I don't know if that means pitching people internally because you want them to get started using it, or pitching your funder. I do think keeping up with the Joneses is a pretty strong argument, because things are accelerating, and the expectations of us and our roles are going to shift as the technology continues to take root. So that's a piece of it. And if the worry is just, will something bad happen if I use it, it's important to remember it is still in a box. These tools, by themselves, are not integrated with all kinds of things; they are not going to delete your database or do other dangerous, scary things. They're in a bit of a sandbox still.
And I think it's important to make the promise that we will try to use it responsibly: that we have spent time thinking about this, about the risks of just copy-pasting, about the risks of doing anything public facing, and promising that we will only use it internally. Really, writing down the rules of the road for how you intend to use it, and how you want people to use it, lets you directly address those risks and point out how you're avoiding them in the way you're going to use it. That's a really important thing. Come at ChatGPT with a purpose: this is how I hope it can help us, and this is why I think the way we're going to use it will be relatively safe. Hopefully that answers it. Well, I'm not sure we have time for another question; I think it might spin us off into future discussions. I want to thank you all for joining us. Fuzz and Alessio especially, we really appreciate you sharing your thoughts and insights with us. We will be sending a follow-up email to everybody who came today with, hopefully, some more answers and thoughts on questions we didn't get to in the chat. It's an open world, and as you can see, the discussion is just starting, so we hope to continue thinking on this, bringing people together to talk about it and explore together, and hopefully putting some pressure on these tech companies to do this in a smart and ethical way, one where society benefits from these technologies. So, yeah, thank you. We appreciate everyone coming.