Hello, everyone, and welcome back if you joined our previous sessions, and welcome if this is the first time you're joining. I'm joined here with Lucy from FullStory, who's going to be leading our next session. So without further ado, over to you, Lucy.

Thanks, Ruth. I'm really excited to be here today with y'all. Again, I'm your presenter today, Lucy Hong, and thanks for spending the next half hour with me. A quick primer on who you're about to listen to: I'm a product manager at FullStory, your go-to spot for understanding your digital experience. On a personal note, though, my career has been focused a lot on health and safety, privacy, integrity, risk, you name it. Basically, I've spent a lot of my time thinking about all the things that could go wrong, prioritizing what to work on first, and shaping the policies and procedures within organizations to manage that risk. I'm also based in San Francisco, and I've recently adopted a cat, who I had to lock out of this room. So yeah, just a little bit about me.

And then here's a primer on what we're going to cover today: managing risks in AI technologies. Just to note, all opinions are my own, except where I've pulled in some headlines to highlight what's going on in the industry. It's a very fast-changing space, and as you've probably noticed recently, it's hard not to notice all the tremendous advances in AI, especially machine learning's generative models. So today we'll talk about those advances in machine learning and what AI governance frameworks you can apply to manage your risk around user privacy and ethics.

To kick it off, here are two anti-goals that I don't want you to come away with from this call. Number one, I'm not here to fearmonger, but that doesn't mean there isn't a very real risk that we are responsible for as shapers of product and of the messaging to the market and our customers. Secondly, I'm not a machine learning engineer or a lawyer, but I am here to talk about the risks of AI, and that should show that even you can start to contribute to your organization's policies and procedures governing AI.

So yeah, we'll get into it. We'll start with a little bit of history. In 2022 we saw a really tremendous advance, or spurt, in those generative models, and along with that there was surprisingly open distribution and access. As a high-level primer on generative models: these are different from the discriminative models that were more widely used in earlier data science work. Discriminative models are a class of supervised machine learning models that make predictions by estimating conditional probability. We won't get into the math of it too much, but TL;DR: they can't generate new samples. It's more of an if-this-then-that logic, used for classification tasks where we might use X features to classify to a particular class Y. One example is email spam: that might be a simple yes-or-no label for the email inspector that you're building.
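To make that concrete, here is a minimal sketch of a discriminative spam classifier. The features, the toy data, and the choice of scikit-learn's logistic regression are my own illustration rather than anything from the talk; the point is just that the model estimates P(spam | features) and outputs a yes-or-no label, and nothing more.

```python
# A minimal, hypothetical sketch of a discriminative spam classifier.
# The feature names and data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [num_links, num_spam_words, sender_known (0/1)]
X = np.array([
    [8, 5, 0],   # spammy-looking email
    [0, 0, 1],   # normal email from a known sender
    [5, 3, 0],
    [1, 0, 1],
])
y = np.array([1, 0, 1, 0])  # 1 = spam, 0 = not spam

# The model learns P(spam | features) directly: it can label emails,
# but it has no model of what an email "looks like", so it cannot
# generate new ones.
clf = LogisticRegression().fit(X, y)

new_email = np.array([[6, 4, 0]])
print(clf.predict(new_email))        # most likely [1], i.e. spam
print(clf.predict_proba(new_email))  # the estimated conditional probabilities
```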
Now we've moved on to the era of generative models, which are a class of algorithms that make predictions by modeling the joint distribution. There are a lot more steps involved to get from the estimated distributions to the probability of a class, but again, the TL;DR is that they take input training samples and learn a model that represents that distribution. So taking that email spam example again, generative models can actually be used over time to generate emails that could even fool the email inspector. The twist is that, over time, the generative model could gradually learn to fool a discriminator, that yes-or-no spam inspector we just talked about, and that's what we're seeing today (there's a small sketch of that contrast at the end of this part).

In more recent advancements, if you take that specific flavor of generative models, we have large language models, or LLMs, that use deep learning and neural networks, such as ChatGPT. We also have text-to-image models, such as DALL-E, that incorporate computer vision and natural language processing. We've even seen text-to-video projects come out of Meta, which takes it a little further than text-to-image. There are a lot of really interesting technologies here that I would urge you to try out.

Now we'll go into one of the initial risks. One thing you'll probably notice about this presentation is that I don't have a lot of images, and that's because one of the risks I'm going to talk about is copyright. Earlier I mentioned that the distribution of these technologies was surprisingly open. Let's take the analogy of cars, because I'm assuming everyone has driven or ridden in a car at some point in their life. To take that car analogy further: everyone has to get a driver's license to make sure you're qualified to drive, and you have to understand the rules of the road. There are also different types of licenses to show that you have knowledge of a specific vehicle. In addition, we have seatbelts and speed limits to protect yourself and others from harm, and there's signage on the road that provides notice and transparency.

With the democratization of generative AI, we're actually giving these cars to a wider audience than ever before, but here the driver's test is optional. Take ChatGPT: how many folks have tested out the open beta there? If you're familiar with Midjourney, another text-to-image service, it's available through a Discord server bot that has millions of users. Personally, I'm all for the wider use of AI and access by different audiences, but we need to recognize that guidelines are required. Where are the seatbelts and speed limits, and who's volunteering to use them for generative AI?
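Looping back to the generative-model contrast mentioned above, here is a rough sketch of the generative counterpart to the earlier spam classifier. The per-class Gaussian assumption and the toy numbers are my own simplification for illustration; the point is that once you model the distribution of the features themselves, you can sample new examples, which a purely discriminative model cannot do.

```python
# A hypothetical sketch of a *generative* take on the same toy spam data:
# model the distribution of features for the spam class and sample from it.
import numpy as np

rng = np.random.default_rng(0)

# Same invented features as before: [num_links, num_spam_words, sender_known]
spam_emails = np.array([[8, 5, 0], [5, 3, 0], [7, 6, 0]], dtype=float)

# "Learn a model of the distribution": here just a per-feature mean and
# spread for the spam class, i.e. a very naive Gaussian model of P(x | spam).
mean = spam_emails.mean(axis=0)
std = spam_emails.std(axis=0) + 1e-6  # avoid zero spread

# Because we modeled the distribution itself, we can generate new samples,
# something the discriminative classifier above cannot do.
synthetic_spam = rng.normal(mean, std, size=(3, 3))
print(synthetic_spam.round(2))

# Over time, samples like these could be tuned to slip past the yes-or-no
# spam inspector: the "generator gradually fooling the discriminator" idea.
```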
There isn't a clear set of guidelines today for the purposes of generative AI, how it should be used, and how it can be measured. And honestly, this isn't much of a surprise, given that the US is already one of the largest countries without significant federal data privacy laws.

To take it back a little bit: most organizations actually found that the onset of GDPR helped them build a clearer, better-organized approach to managing consumer transparency and privacy. And as frustrating as it probably is for us to see all those cookie banners today, that set of standards still raised the tide for all ships, and the humans on them. A Deloitte survey found that 44% of consumers felt that organizations cared more about their privacy after GDPR came into force. And even now, Europe is leading the way with the proposed AI Act, the first set of regulatory frameworks for AI governance. So I think today we're seeing that folks are being given cars without seatbelts and being told to drive off and explore generative AI. With this great power comes great responsibility, and your organization, and you, should account for that within your AI and product strategy.

Now we'll go into the copyright piece that I touched on a little earlier. Here you have a headline from the New York Times: an AI-generated picture won an art prize, and artists aren't happy. A digital artist entered an art contest in Colorado in the digital arts category and won first place and $300; you know, some good coffee money there. He actually used Midjourney, which I talked about before, the service available through a Discord server that provides text-to-image renderings. The digital artist took these renderings from Midjourney, made significant adjustments to the images in Photoshop until he was satisfied, enhanced the resolution using a tool called Gigapixel, and ended up submitting the pieces on canvas. They're now listed for sale at $750 apiece, based on his assessment of fair market value.

With all these advances in technology, the question comes up of what makes AI different from a camera that captures the presence of someone else's creation, and here the answer is copyright. This is a significant headline from Reuters because it documents one of the first decisions by a US court or agency on the scope of copyright protections for AI-created works. The example relates to images in a graphic novel that were generated by that same AI system, Midjourney. The US Copyright Office ruled that the images in the graphic novel should not have been granted copyright protection; the text the author wrote is protected, but not the images. Though the author did mastermind the prompts for the text-to-image generation, the author ultimately did not create those images.

So how does this relate to you and your customers? Takeaway number one: your customers will be under scrutiny for using the AI tools and services that you provide. So how can you protect your customers from that risk?
Takeaway number two: you need to avoid the use of these protected datasets, or find ways to partner on the rights to some of them; otherwise your monetization strategies will be impacted. I suggest partnering directly with creators, or the sources of this training data, to ensure that your value chains are protected.

And as a note, this space is constantly changing. Just last week we had some notable names sign an open letter calling for a six-month pause on developing AI more powerful than GPT-4, which I believe is open to access as of today. Is this open letter a ploy by certain founders of OpenAI to solidify their lead in the market? I'm not really here to comment on that; my intention is more to highlight the risks of AI to society as a whole, how those risks have been recognized by leaders, and the gap between that recognition and the action we're actually seeing to address it. The open letter says that AI systems such as GPT-4 are now becoming human-competitive at general tasks, and that there are risks that such systems could be used to generate misinformation at massive scale, as well as socioeconomic impacts from the potential mass automation of jobs.

Again, it's not my intention to fearmonger, but it is important to at least be aware of the risks, such as the prevalence of deepfakes. With additional tools to generate AI content, there are simply more tools available for bad actors to use, so your trust, safety, and integrity teams, all those specialized teams, will also need to level up to understand and combat malicious use of these tools. And if you're in this space already, you're pretty familiar with how quickly bad actors and fraud rings can level up; they're really agile.

Here are some examples of what to consider in the risk and fraud space. Take identity verification: driver's licenses or other documents that get submitted could also be generated by AI, as could certain pieces of PII. We also have social media, where the prevalence of bots is already a huge issue; imagine bots having access to large language models that can replicate human speech, language, and communication at a higher level than before. Extremist groups or other unsavory characters might take advantage of these tools to further their agendas on your community platforms. And the last example is financial services, where we can think about account takeovers, or scammers tricking unsuspecting folks into revealing information with really sophisticated pre-generated chat scripts.

Now that we've covered how unexpected uses of AI by bad actors might introduce risk, we'll move on to risk management frameworks. How can we address these? Today we'll walk through the traditional risk management framework first, and then we'll talk about how it evolves for AI governance. This follows guidance from the U.S. Department of Commerce's National Institute of Standards and Technology (NIST). Y'all are probably already familiar with the essential activities to prepare your organization to manage security and privacy risks. One: categorize the system and the information that's processed (where it's stored, how it's transmitted) and run an impact analysis on that.
Next, select controls to actually protect your system based on that initial impact analysis, and implement and document those controls. Over time you'll also need to assess and audit whether those controls are actually in place, operating as intended, and producing the desired results. You also need senior officials to authorize the system to operate. And lastly, continuously monitor those controls and that communication, along with any additional risks that might come up for your system. For all of this, you really need a higher-level set of policies and procedures to guide your organization where software can't; ideally you have all three of those. To tie this to analytics and some of the AI talks later, I think the really common themes you can pull out here are that data classification piece and also establishing the provenance of that data. That's what we'll go into a little more next. I'm going to take a break for water.

One thing I want to bring up is that there aren't really any significant regulatory frameworks regarding AI governance today, and it's more likely that the EU will get there first, like they did with GDPR. In fact, the EU has already proposed the first set of regulatory frameworks, called the AI Act. Look it up; I really recommend reading it. Some of the recommendations on governmental action in that space are to establish new authorities or agencies capable of tracking and overseeing the development of advanced AI, as well as the large data centers used to train it. There are also potential recommendations to watermark or establish the provenance of AI-generated content. Then there's liability: what happens when harm is caused by AI? And additionally, support for increased public funding for AI safety research.

On a separate note from those recommendations in the AI Act, we'll now get into the AI Risk Management Framework. This is version one, shared by that same National Institute of Standards and Technology. Again, I would recommend keeping abreast of the space because it changes incredibly fast: GPT-3 was the talk of the town just a few months ago, and now GPT-4 will soon be available to users as well. So yeah, again, this is just version one. The goal of this risk management framework published by NIST is really to cultivate trust in AI technologies, which is necessary if society as a whole is to widely accept AI. The core of the framework describes four specific functions: govern at the center, with map, measure, and manage around it.
This is to help organizations address the risks of AI systems in practice, and we'll talk through each of these functions, how they're applied in context-specific use cases, and how they run throughout the stages of the AI lifecycle.

At the center of this is really building those initial policies, procedures, and processes: govern. You want to make sure that a culture of risk management is cultivated and present within your organization. Ensure that teams are able to manage, understand, and document the regulatory requirements involving AI, tie that to specific tactical policies, procedures, and steps within your organization, and take it a step further by tying it to the actual product experience and product design. You also want to ensure there are mechanisms in place to actually govern an inventory of your AI systems. And as you're building your AI governance team, make sure it's a diverse team, with diverse skill sets, backgrounds, and so on. Something you could start as soon as next week is to host tabletop exercises: encourage your teammates to try out ChatGPT, try out DALL-E, and build a muscle for this type of thinking about how these tools might be used and how they might be governed.

Next we have the map piece, which really ties things to context: how do we recognize the context that determines which risks matter most? For example, take gambling and the entertainment industry. There it's probably perfectly fine to talk about gambling and propose products and features around it, but in other cases gambling could be a more sensitive, not-suited-for-work topic. That context really matters, and by understanding it you can define the intended purpose, benefits, and norms under which AI can be deployed, and document them. Also within this map function, define the specific methods used to implement the tasks the AI system will support; for example, you'd want to at least outline whether this is a classifier versus a generative model versus more of a recommender. Being able to define those specific methods is really important. As part of the map function you also want to develop internal risk controls for the components of the AI system and keep abreast of any third-party AI technologies that might be used. Lastly for map, you also want to address the privacy and the provenance of the data used in the system's creation.

For the manage portion of the framework, this is where you can apply your PM mindset: assess what risks exist, what to prioritize, and how to act based on the projected impact. And finally we have the measure portion, which is about enumerating the approaches or metrics for the risks of adopting AI. You'll want to regularly assess the appropriateness of the system and its impacts on affected user groups or communities, and I would recommend pulling in domain experts and users to be consulted for their feedback. So hopefully this all resonates with you; it's not too dissimilar from what you already do, but it's really important to outline those functions, which we'll get to on the next slide.
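To make the govern and map functions a little more tangible, here is a rough sketch of what one entry in an AI system inventory could look like. The field names and example values are entirely hypothetical, my own illustration of documenting purpose, method, data provenance, and controls with owners; they are not a format prescribed by the NIST framework or by FullStory.

```python
# A hypothetical, minimal sketch of one entry in an AI system inventory,
# covering the "govern" and "map" functions: intended purpose, modeling
# approach, data provenance, and risk controls with accountable owners.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    intended_purpose: str           # map: context and intended use
    method: str                     # map: classifier / generative / recommender
    data_sources: list[str]         # map: provenance of the training data
    consent_documented: bool        # did the data sources consent to this use?
    third_party_components: list[str]
    risk_controls: dict[str, str]   # control -> accountable owner (govern)
    review_cadence: str             # e.g. quarterly governance review

example = AISystemRecord(
    name="support-reply-drafter",
    intended_purpose="Draft, but never send, responses to support tickets",
    method="generative (LLM); a human reviews every draft before sending",
    data_sources=["internal support tickets, 2021-2023"],
    consent_documented=True,
    third_party_components=["hosted LLM API"],
    risk_controls={
        "human-in-the-loop review": "Support lead",
        "PII scrubbing of training data": "Data platform team",
        "output monitoring and drift checks": "ML on-call",
    },
    review_cadence="quarterly",
)
print(example.name, "->", example.method)
```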
So how will your actions translate? What are we protecting against? One, we want to provide respect for the original creators and artists. Given the lack of copyright protections for AI-generated works today, you want to partner with those original creators to protect your value chain and your monetization strategies. Secondly, protection of privacy and ethics. Like I mentioned on the previous slide, you want to carefully select the initial data used to train these models to avoid including toxic or biased content, and make sure it originates from sources that have given consent for their data to be used in this way, have been provided proper notice, and have the ability to opt out if needed. You also want to be careful about how AI tools might be used by bad actors, for example as threats against democracy on community platforms, or in financial services by scammers. So again, invest appropriately in your trust and safety teams; you can start today just by encouraging those teams to try prompt engineering themselves. It's really important to develop a familiarity with this technology.

Lastly, it all comes down to reducing risk for our customers and society as a whole. Some strategies here: rather than pulling an off-the-shelf generative AI model, you could consider building smaller, more specialized models that are tuned to the needs of your organization. I would also recommend keeping a human in the loop to check the output of generative AI before it's actually published or used. And to make sure you're able to use AI responsibly while building familiarity with it, I'd recommend avoiding generative AI models for critical decisions, such as those involving significant resources or human welfare. While there's a need for us to be competitive and familiar with innovation in the space, we also have a responsibility to think about the impact this might have on certain communities and on society.

So here are the takeaways, and for all of these you can start today; timing is important, as we all know. One, you want to prioritize trust, because product management for AI is ultimately probabilistic and not deterministic. As you've probably seen in a talk earlier today, trust is easy to lose and hard to gain, so it's important to prioritize consumer trust, transparency, and ethical principles when building these out. The reason this matters is that machine learning adds even more uncertainty: the trade-off for the scale we're able to attain with machine learning is that a small percentage of predictions are going to be incorrect, and it's going to be really hard to understand why they're incorrect, because these models lack explainability.
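Since keeping a human in the loop comes up a couple of times here, this is a minimal sketch of what that gate could look like in code. The function names and the review-queue structure are my own invention for illustration; the only point is that generated output is held for explicit human approval rather than published automatically.

```python
# A hypothetical sketch of a human-in-the-loop gate: generated drafts are
# queued for review, and nothing is published without an explicit approval.
from dataclasses import dataclass

@dataclass
class Draft:
    draft_id: int
    text: str
    approved: bool = False

review_queue: list[Draft] = []

def submit_generated_draft(draft_id: int, text: str) -> None:
    """Model output goes into a queue instead of straight to customers."""
    review_queue.append(Draft(draft_id, text))

def approve(draft_id: int, reviewer: str) -> None:
    """Only a human reviewer flips the approval flag."""
    for draft in review_queue:
        if draft.draft_id == draft_id:
            draft.approved = True
            print(f"draft {draft_id} approved by {reviewer}")

def publish_approved() -> list[str]:
    """Publish only what a person has signed off on."""
    return [d.text for d in review_queue if d.approved]

# Usage: the generative model proposes, a person decides.
submit_generated_draft(1, "Hi! Here's a suggested reply to your ticket...")
approve(1, reviewer="support lead")
print(publish_approved())
```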
They lack explainability: ML systems trained on seemingly similar input and output data sets can sometimes give you wildly different results. This has really serious implications overall for the product development lifecycle and also for software development practices such as versioning and testing, because the data is never as stable as we think. As your product inevitably evolves, the models you've built will also start to drift and will need to be monitored and managed, again tying back to that risk management framework.

Lastly, there also needs to be that foundational governance framework for your organization's teams to put into practice. At a high level, there needs to be a partnership between community, legal, and policy teams to build this governance framework and review it at least quarterly. Then, mapping that to the risk management framework we discussed earlier in terms of map, measure, and manage, assign your stakeholders: who's responsible for putting these controls in place and monitoring that they're actually being implemented and producing the desired results?

So I'm here today basically to chat about how we can all make an impact, even if you're not a domain expert in ML. I think with the advent of ChatGPT and GPT-4 we've really seen how accessible this can become to more folks, and because of that, everyone has a responsibility to weigh in and carry out this AI governance framework. Thank you for your time.

Awesome, thank you so much, Lucy. I think we have a few questions in chat. Liza asked this earlier, but I think we may have covered it: what are some of the unique risks associated with AI technologies that traditional frameworks may not adequately address? I don't know if you wanted to add anything else on that, but we also have a few others.

Yeah, I was wondering if you could restate the question; I was looking through the chat, so my bad.

No worries. What are some unique risks associated with AI technologies that traditional frameworks may not adequately address?

Yeah, so I think it really comes back to that monitoring portion I mentioned towards the end. With generative AI, the input and output pairs you have initially can produce really different results as outcomes, and it's really hard to understand why, because these models lack explainability. Because of that, you'll want to keep a human in the loop to actually review the results and monitor them over time. There's a concept called model drift, where these inputs and outputs can change. As product managers, I think we're used to this problem in a different capacity, where you ship something and you say, great, I'm going into maintenance mode, I'll listen to customer feedback and make sure everything's performing as expected. That works for more deterministic products where you always have a certain set of outputs, but for these generative models that's not always the case, so you'll need to keep a closer eye on monitoring.
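Model drift is easier to picture with a concrete check. Here is a rough sketch of one very simple approach: compare the distribution of a model's recent prediction scores against a baseline window and flag a large shift. The population-stability-index-style score, the windows, and the 0.2 threshold are all my own assumptions for illustration, not a recipe from the talk or from NIST.

```python
# A hypothetical sketch of a simple drift check: compare recent prediction
# scores against a baseline window with a PSI-style statistic and alert
# when the shift crosses a rule-of-thumb threshold.
import numpy as np

def population_stability_index(baseline, recent, bins=10):
    """Crude PSI: bucket both samples on the baseline's quantiles and
    compare the bucket proportions."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    base_counts = np.histogram(np.clip(baseline, edges[0], edges[-1]), bins=edges)[0]
    rec_counts = np.histogram(np.clip(recent, edges[0], edges[-1]), bins=edges)[0]
    base_p = np.clip(base_counts / len(baseline), 1e-6, None)
    rec_p = np.clip(rec_counts / len(recent), 1e-6, None)
    return float(np.sum((rec_p - base_p) * np.log(rec_p / base_p)))

rng = np.random.default_rng(7)
baseline_scores = rng.beta(2, 5, size=5_000)  # scores when the model shipped
recent_scores = rng.beta(3, 3, size=5_000)    # this week's scores: shifted

psi = population_stability_index(baseline_scores, recent_scores)
print(f"PSI = {psi:.3f}")
if psi > 0.2:  # common rule-of-thumb threshold, used here as an assumption
    print("Significant drift: route outputs to human review and retrain.")
```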
Awesome, perfect, thank you so much. The next question: is there an example you can share of a specific AI-backed product that can be developed, or commented on, by walking through the NIST framework?

Yeah, that's a good question, so let me think about it. Probably the text-to-image example, because that's probably the one with the widest impact, since so many different users across the world have access to ChatGPT or DALL-E, either of those types of generative models. I think one really big question that comes up for me again is copyright. We've already seen rulings from the US Copyright Office that AI-generated works aren't protected, but at the same time, companies like Midjourney are still providing services. There's probably work their teams are already doing to address these claims: how can they protect their value chains, and can they partner with creators to manage how that AI is being used and document it? But those specific examples also depend on your specific organization.

Awesome, thank you. I've got another couple of questions for the couple of minutes we have left. What steps can individuals take to understand the risks associated with AI technologies?

Yeah, that's another really good question. I would say just start getting familiar: test out ChatGPT. I talk to ChatGPT every day. Also be careful about what data you're putting into ChatGPT; for example, would you be comfortable with what you're copying and pasting ending up in the headline of a newspaper? I think that's where folks can start. If you're not really familiar, I'd also recommend taking basic Coursera courses if you want to go a little deeper into understanding the underlying technology. And then lastly, and this is probably the hardest part, be an advocate within your organization. It's difficult to feel like the only person in the room saying no, or wanting to slow the company down, when you see so many of your industry peers wanting to move fast. But because of how powerful AI is, I think it's important to understand the risks associated with it and to show that you can have a voice too, even if you're not officially part of your policy team.

Awesome, thank you. Okay, I've got one more question to finish: what are some of the ethical considerations that organizations must take into account when using AI, and how can they ensure that they are not inadvertently creating biases and discrimination?

Yes, lots of good questions from this crowd; honestly, I wish we had more time to just talk about this in person on a panel or something. There are frameworks already published by larger tech companies; for example, Microsoft has an office of responsible innovation that would be a good resource. As far as perpetuating biases and discrimination goes, it really comes down to building a diverse team from those backgrounds and making sure that your organization supports those teams by putting resources, headcount, and financial backing toward them as well.

Awesome. Oh, sorry, and then someone has asked: are there any particular Coursera courses, and I can't pronounce the name, I'm going to butcher it, but I'm sure you know, that you'd recommend?

Yeah, thanks, Ruth. For that one, I'd recommend anything by Andrew Ng; I found his courses really helpful. If you need it, that's Andrew, A-n-d-r-e-w, last name N-g.