As we were planning this whole day, I was really excited as I saw the agenda coming together. And then I saw that I was going to be moderating a panel, even more excited. Then I found out Charlie was going to open up, even more excited. Then I saw who was going to be on the panel, and I read your bios, and I thought, I thought I was interesting. I know the work I do is interesting. And what I realized as I read your bios is that my LinkedIn is much cooler than my Facebook. So I'd love for you to share. I know your bios are in the program, but hearing you talk about your story and how you came to where you are now would be intriguing to the audience. So Zach, you want to start us off?

Yeah, hey guys, I'm Zach Jones. I founded an education nonprofit right out of school. One of the things we focused on, working with kids, was workforce development, developing young people into problem solvers. We did work with the Department of Labor, the Department of Education, and the Department of Health and Human Services. And we kept facing this problem that it was really hard to onboard kids to a program, high school students who had never had a job or work experience. We kept running into these really difficult identity problems that took our focus away from education and creativity and entrepreneurship with young people and forced us to do a lot of administrative work just to collect documents so kids could get paid and join the program, and so you could prove the impact. All of these processes around onboarding and identity verification were super challenging, a real hassle that I observed across the nonprofits we worked with. So as I was looking at other opportunities, at how technology can improve our lives as humans and make the world more accessible, let us access the things we need more easily, more seamlessly, I was learning about digital identity and verifiable credentials and blockchain and how all these things could potentially be helpful. That's where I found Trinsic in my journey. I wanted to move into technology because I felt there was an opportunity to use technology to make the world better, and Trinsic was that opportunity for me. I felt we could offer a digital identity solution that was both privacy-enhancing and very consumer-friendly, that made everything easier. You didn't have to constantly re-verify yourself from vendor to vendor, place to place. And having done that hands-on work with young people and high school students, I saw immediately how the technology Trinsic was building could have helped in the nonprofit days. That made me really compelled to get into the digital identity space, and that's how I got to where I am now.

So yeah, hey everyone, thanks for having me here, coming up from DC to get out of the DC bubble. There were some funny DC comments earlier. It's good to be up here. I'm with the National Institute of Standards and Technology. You may know us as the nation's measurement lab. I work within the Privacy Engineering Program at NIST, and I'm the NIST Privacy Framework lead. I also do a lot of work around emerging technology and the privacy workforce. But I came to this tech policy space from the music industry, which may have been the thing that you noticed first.
I was also chuckling a little to myself when Joe was giving his talk, because I was probably the poster boy for delayed adulthood. But part of what I was doing in the music industry was getting exposed to copyright policy and the interesting challenges that exist there, particularly for artists: the tension between technology and how that's been helpful for the consumer and for the artist, but also interesting issues in artist equity. So I came to DC to do copyright policy and then opened the aperture up to tech policy more broadly, because I could see how all of these topics are interconnected. And now it almost feels like it's come full circle. I went into privacy as a focus coming from copyright, but now with AI, and we've touched on this earlier today, there are significant copyright and IP implications that need to be taken into account. So it's interesting to see all of that coming full circle. I'll be talking about this through the privacy lens, but I think a lot of the risk management work we do at NIST is implicated in the broader topics we're gonna want to talk about when it comes to innovation.

Great, great to have you both here. And it looks like, based on some of the words coming through, we're gonna hit on many of those. So let's take the great starting point that Charlie gave us and stick there for a moment. Zach, I'm gonna ask you: when you think about culture, and I think labor rights and worker wellbeing obviously come into play, any thoughts on strategies that can be employed to address those concerns from employees?

Yeah, so at Trinsic, we provide technology infrastructure that companies use across different sectors, and one of those is the supply chain space. One of the companies that uses Trinsic is called Blue Number. They're basically a worker voice and worker rights platform, primarily in developing nations where goods are being manufactured, electronics are being manufactured, and companies want to be able to make statements that their workplace is free of forced labor or free of child labor. Blue Number has taken some of our technology and given workers a digital wallet: a way to answer questions and surveys and attest to certain facts about the workplace that can then be verified and allow the company to make statements about its work environments. So there's this dynamic of going from the ground up, from the source of truth, which is the worker in the factory, and allowing them to share information about the workplace. But because of the nature of the digital wallet, the nature of the technology we use, it preserves privacy. The users are able to claim ownership of their data without necessarily putting their name on it, putting their stamp on it. We'll get into this more later, but what's exciting about this emerging technology space we're playing in is the ability to prove things, to prove claims cryptographically, without having to share all the data. That's one of the things Blue Number is tapping into. And they've also integrated some other elements of compensation for workers who are giving their data up. How do they get compensated for that? So workers are incentivized to share information, and not penalized for sharing information that maybe reflects poorly on their workplace.
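An aside to make the cryptographic piece of Zach's answer concrete: the sketch below shows selective disclosure using salted hash commitments, where an auditor can verify specific attested claims without ever learning the worker's identity. This is a toy illustration under stated assumptions, not Trinsic's or Blue Number's actual implementation; production systems use signed verifiable credentials and zero-knowledge proofs, and every name and value below is hypothetical.

```python
import hashlib
import json
import secrets

def commit(value: str, salt: str) -> str:
    """Hash commitment to a single claim value."""
    return hashlib.sha256(f"{salt}:{value}".encode()).hexdigest()

# The issuer (e.g., a survey platform) commits to every claim in a
# worker's attestation separately, so each can be disclosed independently.
claims = {
    "worker_id": "w-4821",            # never needs to be revealed
    "site": "factory-7",
    "observed_forced_labor": "no",
    "paid_on_time": "yes",
}
salts = {k: secrets.token_hex(16) for k in claims}
commitments = {k: commit(v, salts[k]) for k, v in claims.items()}

# The credential carries only the commitments; in a real deployment
# the issuer would sign this blob with its private key.
credential = {"commitments": commitments, "issuer": "survey-platform"}

# The worker discloses just two claims to an auditor (value plus salt),
# withholding worker_id entirely.
disclosure = {k: (claims[k], salts[k])
              for k in ("observed_forced_labor", "paid_on_time")}

# The auditor checks the disclosed claims against the credential
# without learning who the worker is.
for key, (value, salt) in disclosure.items():
    assert commit(value, salt) == credential["commitments"][key]
print(json.dumps({k: v for k, (v, _) in disclosure.items()}, indent=2))
```

The design choice this illustrates is the one Zach names: the company can make verifiable statements about its workplace built from worker attestations, while the worker's name never leaves the wallet.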
Yeah, that hits on what we heard earlier in several of the sessions, this notion of the convenience factor of sharing data versus the privacy factor, right? And on privacy, Dylan, I'll bounce over to you on that, because that's your wheelhouse. Before we do, though, I just want to pull up the second poll question, on privacy. We're going to see if you all can guess the correct answer here, from a Pew Research Center survey. Based on what we heard before, you might get this right, but we'll see. So: what percentage of Americans are concerned about the way that their data is being used by companies? Give it a second. It looks like 89 is the favorite right now. Feel free to keep answering after I say it, but 79 is the correct answer. Which is the vast majority, right? And if you didn't believe this, listen to what you heard earlier today, because that validates it.

Well, but this gets to the sort of privacy paradox that Joe talked about on his panel, where folks tend to say that privacy matters to them, but sometimes what customers or consumers actually do may seem to run counter to that, participating in our data economy in a way that seems, I don't know, hypocritical could be a word, or just counterintuitive to what they're claiming. There have been some studies on this. There's a term in the academic literature called learned helplessness. I think we're all very familiar with the endless stream of impenetrable privacy policies that we're expected to read and understand, that basically serve as contracts for the things we do online. And to an extent there's also simply an information asymmetry, where folks may not fully understand exactly what's going on under the hood. But the main takeaway from the privacy conversation is a couple of things. First of all, privacy is complicated, and everyone in here probably has a slightly different take on their own privacy. Some folks say, I have nothing to hide, take all my information, my toilet behaviors from earlier; others get upset about that type of thing, want to be left alone, and are kind of Luddites by nature. We all exist somewhere on that spectrum.

I was interested coming up here because, again, I'm in the DC bubble, and I work in privacy risk management at large, so I haven't actually had much of a chance to interact with the supply chain stakeholder community. It's great to be here, and I hope to talk to some of you after this. I was thinking, when I think of supply chain, it's often, well, there's not a lot of personal data involved. There's certainly lots of data involved, and we've talked about the incredible insights that can be generated, but going through these conversations it became very clear that yes, of course there's personal data involved. Julie mentioned this interesting, almost reverse supply chain, where the customer, the individual, is now the supplier: maybe you're reselling goods you own in your closet, my wife does that all the time, and you're now supplying through a chain to another customer.
There's gonna be a lot of data involved there. And then there's the workforce component, where there could be ways in which an organization avails itself of tools and technologies for worker efficiencies that implicate the data of workers. That can create privacy risks, it can create cybersecurity risks. And AI has a whole panoply of risks, privacy and cybersecurity included, but there are safety components in there too. So at NIST we think of it as a data processing ecosystem, as opposed to a supply chain. Our stakeholders were clear, as we were creating the NIST Privacy Framework, that this is a better way to conceive of it, because it is so interconnected, so complex, and there are places where data points can have privacy, cybersecurity, and other risks implicated. And we have to be clear that this isn't just our standard conception of, like, my credit card. As has been pointed out earlier, our behaviors, everything we do, can be used to generate rich profiles of who we are. Those who know advertising and marketing know this well. To an extent, for a lot of folks that doesn't matter; for others it does. But organizations need to be thinking, as they avail themselves of these interesting new technologies and try to innovate, including through the supply chain: what are the risks that arise, and how does the organization want to manage them? So that's where I'm coming at this from: there are trade-offs, and there are conversations that need to be had. We'll dig into that more, I'm sure.

Yeah, I always find it ironic that people, myself included, complain about privacy while typically typing on their smartphone, which has your whole life on it. Actually, Dylan, I'm gonna stick with you for one second, because there's a sub-question to that Pew Research survey I shared earlier, the one that found 79% of Americans are concerned about the way their data is being used by companies. Underneath that, 64% of Americans believe the government should do more to regulate the way companies collect and use data. So since you're here, and thankfully the shutdown didn't happen, so we're happy to have you here: what should the government do?

Yeah, that was definitely an open question. So, okay, I am at NIST; we're a non-regulatory agency, so I can't really comment too much on regulation or those types of questions. What I can say is that we believe strongly that a risk management approach is the right approach to doing privacy, but there are going to be situations in which there are safeguards that can be put in place through regulation. Regulation exists to address market failures, and so to the extent that there's a need to provide accountability mechanisms through regulation, it can play a role. But another talking point I'm going to keep coming back to is that technology will always outpace law and regulation. Our Congress, we don't have a Speaker of the House at this point, let alone a national privacy law. Things go slowly, even in the best of days.
Technology goes quickly, and so when we're thinking about innovating and the societal implications of this, we need to come back to ethics. We need to come back to: what is it going to take to establish and maintain trust with your customer, with your business partners, throughout the ecosystem? That's where being proactive and doing risk management is going to be the more effective long-term solution, in this view. Government regulation is going to play an important role, it always will exist, but I don't think organizations can count on it. We constantly hear of CEOs saying, please regulate us. That may or may not happen, but in the meantime, organizations, in order to do the types of things we've been talking about, building trust, maintaining a healthy ecosystem, so to speak, are going to need to do the right thing, even when the wrong thing isn't illegal.

And Zach, I think you've got some interesting things to share on the right thing, definitely not on the illegal side, but on the right way to safeguard information as supply chains get more interconnected and data-driven, right? I'd love for you to share some of your thoughts on that.

Yeah, so the data exchange paradigm we operate under at Trinsic is that users consent to the use of their data, users own their data, and then they can permission it and share it with people in the ways that they choose, right? If you show up to an e-commerce website and you own the data about your preferences and your sizes and your habits, it feels a lot better to share that with, say, J.Crew than it does for J.Crew to say, we have found this IP address's search history and it looks like you're a medium. That's really creepy. At the end of the day, if you want a personalized experience, there are ways to go about it in a more user-centric manner, where people own their data, share it, and explicitly grant permission to it. People want customization, people want personalization, but people don't want to feel like the company knows everything about them, knows their sizes and habits, even if it's true, right? We know it's true, we know that data's being collected, good or bad, but right now people don't have a great way to own their data and share it in a way that makes their lives easier. So that's a lot of the stuff we focus on.

And it has implications when you start to think about supply chain traceability, certifications, things that producers have to verify about themselves and about the origin of supplies and materials. That often puts a large burden of proof on the smallest organizations. So we think about how you alleviate some of that burden by giving people access to their data in a more standardized, easy-to-use format, so that they don't have to submit hundred-page paper applications, but can instead carry with them a digital certification that's been approved by somebody and can be recognized across a network of different applications or platforms or ecosystems. So again, you don't have people redoing all this work, reapplying, recertifying. You have control of your data.
You can share it with whoever you want. You know what it's being used for at the time of that interaction. And I think that creates an internet culture, a purchasing culture, an e-commerce culture that feels a lot better for everybody, but still delivers the kinds of experiences we've heard about today, more personalized, more tailored to the individual, without diving too much into the creepy data collection stuff that nobody wants.

Yeah, I think one of the simplest ways to convey what you just said I actually heard on the GS1 US Next Level Supply Chain podcast, shout-out to the host, Reid Jackson, back there. There was an episode, I believe a couple months ago, with this example: I haven't gotten carded in a while, but if I'm going to buy a margarita and I get carded, they don't need to know my address. All they need is age verification, but I'm handing over all this extra info they don't need just to validate my age. So I think this notion of zero-knowledge proofs is...

Well, there's an interesting point there, yeah. PETs, privacy-enhancing technologies, are gonna play a role in a lot of this, getting back to those conversations around how we train AI models on proprietary data sets. We've got a lot of issues with just finding data generally, especially now that the internet, which has typically been the source of data for large language models, is getting polluted by its own AI output in an interesting way, as one of the panelists noted. But when we think about things like AI, it puts a strain on the traditional privacy principles, the fair information practices. Data minimization, for example: minimizing data would mean all I need to collect from your driver's license is this one particular thing. But how do we handle that principle when as much data as possible is necessary to feed these hungry models? That strain on the traditional privacy principles just further illustrates the importance of doing risk assessment and risk management. So you say, okay, we're interested in a new data processing solution for our supply chain, and it may involve AI. Instead of privacy and cybersecurity teams and others coming in and being the party poopers, which they have been in the past, the idea is you come in and say, let's have a conversation about what the risks are. And even before the assessment, we need to have an enterprise-level, high-level conversation around things like: what are our values? What are our privacy values? What matters to us? What's our risk tolerance? What are our compliance obligations? All of these considerations need to go into the analysis of a risk assessment, because they'll help you respond to the risks in a way that makes sense for the organization. We have tools at NIST, the NIST Cybersecurity Framework, this Privacy Framework, the NIST AI Risk Management Framework, and a lot of other supplemental resources, that can help support that, because it's an interdisciplinary, cross-functional type of endeavor. It requires good communication across the enterprise, with folks not talking past each other. This happens a lot. I'd be curious to know if it happens in the supply chain; it happens oftentimes in the other sectors I talk with. The folks doing the technology are not speaking the same language as the folks doing policy and governance, and that's a real issue. So we try to provide the common language and framework to support these types of activities, the outcomes you need to achieve and the conversations you need to have that are gonna inform these decisions. Because if you're just going ad hoc with it, it can create a real issue, not only with compliance, but with the efficiency of innovating and bringing things to market.
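As an aside, here's what the carding example above might look like in code: proving the one predicate the bar actually needs, over 21, without handing over an address or a birthdate. This is a simplified, hypothetical sketch, not any vendor's real API. It stands in for a signature with an HMAC so it stays self-contained; real credential systems use public-key signatures or zero-knowledge range proofs so the verifier never needs the issuer's secret.

```python
import hashlib
import hmac
import json

# Stand-in for a real issuer private key; in practice the issuer signs
# with a public-key scheme and the verifier only needs the public key.
ISSUER_KEY = b"dmv-secret-signing-key"

def sign(payload: dict) -> str:
    blob = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(ISSUER_KEY, blob, hashlib.sha256).hexdigest()

# The issuer (e.g., a DMV) attests only the predicate the bar needs:
# "over 21", not the birthdate or address behind it.
claim = {"subject": "holder-123", "age_over_21": True}
credential = {"claim": claim, "signature": sign(claim)}

# The verifier (the bar) checks the attestation and learns one bit.
def verify(cred: dict) -> bool:
    return hmac.compare_digest(sign(cred["claim"]), cred["signature"])

assert verify(credential) and credential["claim"]["age_over_21"]
print("age verified; no address or birthdate disclosed")
```

This is also a small answer to the data minimization strain Dylan describes: the collected data shrinks to exactly the predicate required by the transaction.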
So we've got some really great questions coming through on Slido, and I'm gonna pull them up on the screen right now. I'm intrigued by one that's actually not up there, so I'll shout it out. For Dylan: can you comment on the innovation on data and privacy coming out of the EU versus the US? To the layman, they seem miles ahead of us.

So I think someone touched on this earlier. We take a different approach than the EU, right? The US tends to be less regulatory in general, a little more hands-off, more market-oriented. The EU tends to head toward a regulatory approach early. We see this with AI: they're working on legislation right now around AI, and there are of course lots of hearings and conversations around what regulation may need to look like in the US, but typically the US is a little slower to regulate on these things. I guess it depends on your view of whether that's good or bad, having a prescriptive regulation or a more market-based approach. What was the comment again, comment on innovation and data privacy policy? I can't comment on whether it's good or bad, but I can say we do have a different approach. And again, going back to my point: even if we get some sort of federal privacy regulation or an AI regulation in the US, technology is gonna outpace it, and there are gonna be risks created that aren't currently regulated. So it's incumbent on organizations to be thinking about how they're going to treat those implications, what their risk tolerance is, how much they want to be thinking about the privacy of their customers, those types of conversations, and then to manage those risks accordingly.

Got it. Zach, I've got a Slido question for you on innovation as well. How do you see the balance between innovation and the need for security with some of these emerging technologies that were discussed today?

Yeah, so to answer that and tie it into this: one of the things that's exciting, and something we've seen a few companies start to address, is, like we see up there, that opting out of data sharing is exhausting. Who really understands the terms and conditions of everything they're accepting and clicking? Basically no one, right? But now we have AI, which is a lot smarter and can start to distill things. So we're starting to hear from people who are trying to deploy AI in a way that can give you a flag or a warning when there's something odd in terms of service that you might not normally see, for example.
So those are the types of applications that would not have been possible a few years ago and now are possible. How you productize that, how you get it adopted, is another question entirely, but the potential to have some sort of agent that can wade through the policy, the terms of service, is a lot more real now. We're starting to see some teams doing this, where you have an agent that helps you understand everything you've consented to: all these websites that now have access, what are the terms of service you've agreed to? Because, again, no human has time to wade through all of that, especially when we want things quickly. We're trying to figure out the answer to a question, we go to a website, and now we have to agree, agree, agree, agree, get me to my thing, I wanna order my food, I wanna get a scooter, I wanna get whatever I want. So I'm optimistic that there are ways new technology, large language models, AI, can help us wade through this. Because I saw one of the comments there, that it sounds like this benefits companies a lot more. Definitely, right? You can hide whatever you want in those terms of service because you know nobody's gonna read it. So now maybe there's a little bit of hope that humans can deploy something on their side that helps them get through the legalese.

Yeah, there was a lot of talk about trust all day, but there hasn't been much drilling down into what it takes to have trust. What does it mean to establish and maintain trust? I actually think it maybe gets overthought, because we can just think about relationships with each other: what is gonna make me trust somebody? A few things. First of all, this person is gonna do the right thing even when they don't have to. They're an honest communicator; they're not trying to hide the ball on anything. Another thing that comes up a lot, I noticed it a little in the sustainability panel, and it's also the kind of thing Joe was talking about, is this skepticism among the youths around performative virtue signaling by companies and corporations, where your actions aren't matching your words. In the privacy context, that's: we care about your privacy, and then, meanwhile, we can change these terms and conditions at any time, we're gonna collect your data indiscriminately, we're gonna do whatever we want with it. So the really interesting question is: how do you remain ethical in a vacuum of law or policy? In this view, the way you do that, at least in the privacy context, and similarly with cybersecurity and AI, is to start with the harms that data processing could create for individuals. We have a full catalog of what we call privacy problems; they're similar to Solove's taxonomy of harms, for the privacy nerds. The idea is to identify the extent to which your data processing activity could create these types of problems for individuals, and the impact should that occur, and then decide how you're going to address that. Are you going to mitigate those risks? Are you going to ignore those risks?
Ultimately, what that means is you're gonna need to have a sense of where your customers are at on privacy, which can be easier said than done, but there do need to be open lines of communication there. And there needs to be an understanding of where the community is on these questions. So you do that risk assessment, and at least then you're making a good-faith effort within your organization to identify the issues arising from your data processing and to respond to them as you want as an organization. But if it's gonna be ethical, it'll be in a way that resonates with your customers and your business partners.

That's interesting, especially when you think about what Charlie talked about: gathering feedback, deciphering what's really being said, aggregating it, and then reacting to it. Within your framework, the Privacy Framework, if an organization out there is working toward making improvements, how do they measure and track their progress at an aggregate level?

Yeah, so there's a lot of talk around what it means to have a maturity model when it comes to privacy, to cybersecurity, things of that nature, and it's an ongoing challenge for organizations. At NIST, we don't do maturity models in the same way that, say, Carnegie Mellon does. What we have within our frameworks are what are called implementation tiers, which are basically organizational benchmarks that can help drive conversations around where your organization is now and where it needs to be on its privacy, cybersecurity, and AI risks. It is difficult, though, trying to put a number on it, because ultimately, in some ways, it's up to the organization. What we try to do at NIST is provide some criteria around what it means to be at a given tier for your given data processing activities. The reason we say that, in a risk management context, you may not want to think of it as a maturity model is that as circumstances change, as your data processing changes, as your products and services change, as your compliance obligations change, you may actually not need to be as far along in your implementation of a given outcome. That's why we think of it as dynamic: looking at where you are now and where you need to be, addressing that gap, but revisiting it on an ongoing basis, because the risk profile is gonna change. And there are gonna be trade-offs. The classic example is cybersecurity and privacy: you have to do a lot of monitoring for cybersecurity, and that monitoring can create privacy risks. Your organization has to decide how it's going to navigate that. Do we go all in on monitoring, even if it may make our workforce upset that they feel surveilled, or do we want to have an open dialogue within the organization about it? Anyway, the point is that those conversations need to take place, and oftentimes they can be hard. So we try to design the frameworks to help facilitate those conversations around trade-offs and risks.

Makes sense. I'm gonna leave those questions on the screen, so if either of you wants to jump in on any of them, just let me know. Zach, a lot of interesting innovation was discussed today, a lot of emerging technology.
Within your space, what are you excited about? What do you see coming down as a current trend or a future trend?

Yeah, so think about identity on the internet. I think there's a general trend that users should own more parts of their identity and more parts of their data. We've traditionally operated in a very transactional identity interchange, right? If I wanna onboard to a fintech application, or even a job, I'm basically gonna go through some verification process, and the fintech is gonna say, cool, Zach can onboard onto this platform, and that's it. Companies are starting to flip that model a little and say: if you were verified to onboard onto this platform, you can actually carry that verification with you, and that makes you a more trusted internet user across other platforms. You can reuse that verification elsewhere, right?

So when you think about artificial intelligence creating new media and new assets, misinformation, social media, one of the things that's really interesting and exciting right now is that Adobe is spearheading this Content Authenticity Initiative. What the Content Authenticity Initiative establishes is an open set of standards that attach metadata to images and media so that people can actually examine their source. There's a demo up on their website, and a consortium of other folks involved, going all the way down to the cameras that may capture an image at a protest, for example. You can basically examine where an image came from and how it was modified throughout its life cycle, right? You can look at the image and see it was captured in one format. You can see that it was brought into Photoshop and edited to change the colors. You can see that it was then combined with another image. You can examine the provenance of this thing. So when we think about what we see on social media every day, there are some exciting innovations in this content authenticity, content credentials space that will hopefully give consumers a lot more power to interrogate the source of the material. We've seen this recently, where people may reuse an image from five years ago and pretend it was from yesterday, and as a consumer there's really nothing you can do to examine that. It's still very early, but this idea of content authenticity, of having a better idea of where media came from, is really exciting.

The other piece this ties into is that you still need some sort of strong, verified digital identity, right? Because at the end of the day, if I can claim that I'm somebody else, that I'm a reporter for the New York Times or the Associated Press, and then somebody examines the media I created and it looks like it came from the New York Times but it actually didn't, that's still a problem. So you need to pair both of these things together. You need the provenance that tells you where the image came from, but you also need that strong form of identity at the very beginning that says who you are. And that often comes back, again, to a verified identity, which may come from a government document that establishes your legal name and that you maybe have some other authority or credentials that allow you to speak on a topic. So I'm definitely excited about the potential to combat misinformation and to give internet users more power to figure out where information is coming from.
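A rough illustration of the provenance idea Zach describes: a hash-chained edit manifest, where each entry binds an action to the state of the asset and to the previous entry, so any tampering with the history breaks the chain. This is a sketch of the general pattern only, with hypothetical names throughout; the actual Content Authenticity Initiative / C2PA specification additionally signs each entry and uses standardized assertion schemas.

```python
import hashlib
import json

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def add_assertion(chain: list, action: str, asset: bytes) -> None:
    """Append a provenance entry binding an edit action to the asset
    state and to the hash of the previous entry."""
    prev = digest(json.dumps(chain[-1], sort_keys=True).encode()) if chain else None
    chain.append({"action": action, "asset_hash": digest(asset), "prev": prev})

# Each step of the image's life is recorded: capture, color edit, composite.
image_v1 = b"raw sensor data from camera"
image_v2 = b"color-graded version"
image_v3 = b"composited with second image"

manifest: list = []
add_assertion(manifest, "captured:camera-model-x", image_v1)
add_assertion(manifest, "edited:color-adjustment", image_v2)
add_assertion(manifest, "composited:second-source", image_v3)

# A consumer tool walks the chain and flags any break in the links.
def verify_chain(chain: list) -> bool:
    for i in range(1, len(chain)):
        expected = digest(json.dumps(chain[i - 1], sort_keys=True).encode())
        if chain[i]["prev"] != expected:
            return False
    return True

print("provenance intact:", verify_chain(manifest))
```

Pairing this chain with the strong issuer-verified identity Zach mentions is what lets a viewer check both what happened to a piece of media and who originally produced it.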
I can address some of this universal opt-out question. So yes, there are laws. We have a very sector-specific legal environment in the US when it comes to privacy: things like banking and healthcare have their own laws. When it comes to more general consumer privacy laws, there is no federal law, but there are a number of state laws. The most notable, and the prime mover, was a law in California called the CCPA, which was then amended by the CPRA. We love a good acronym in DC. Within the CCPA and CPRA there are requirements for covered entities to have those kinds of transparency mechanisms in place, your notices being clear about how you're using data. In particular, there is an opt-out within that legal regime, and they're working on regulations around a universal opt-out. So these types of things are in the works on the regulatory front.

But it gets back to this idea of the data processing ecosystem and managing privacy risks. A lot of SMBs have to rely on vendors, and sometimes there may be a bit of a lack of clarity around what services are being provided and how they relate to the company. So the organization, or the ecosystem, has to have the conversations around what its risk tolerance is, what its privacy values are, those governance things I talked about earlier, and then work to understand the vendor solutions it has, what kinds of contracts it's entering into to support its data processing activities, and whether those align with the privacy equities of the organization or its ecosystem. That's easier said than done, and it's complex, and it requires some tools, which we provide. I'd encourage folks to take a look at the Privacy Framework, the Cybersecurity Framework, the AI Risk Management Framework. We have a privacy risk assessment methodology, and we also have a catalog of privacy and security controls. It's all free, your tax dollars at work. Happy to chat about this afterwards over some drinks.

But long story short, these types of things are coming into the fold now and slowly crystallizing for organizations. Meanwhile, organizations are talking about AI, where there's none of this type of regulation going on. So the organization has to say: all right, how are we gonna respond to the risks associated with AI? Which include privacy, but also all these other things we've talked about, IP, all of it. So it's gonna be incumbent on organizations to do a bit of the hard work of investing in the kind of infrastructure you need in place to do this risk management. Easier said than done. But what I'm encouraged by is hearing that a lot of organizations are adopting this and realizing that the pure compliance approach to addressing issues related to technology is not working. It's creating a lot of whack-a-mole.
It's reactive and ad hoc. It's spending money on things that aren't proactive, and it's frankly stifling innovation. So that's one thing we wanna try to avoid.

There was a question that came up twice in earlier sessions, in very similar form, and I'd love to get your thoughts on it. We opened up talking about your bios and your stories and how you both came to where you are, experts in your fields. I was really intrigued earlier by the questions around what sort of skills we should be teaching at the high school or college level to folks who are interested in privacy or digital identity. So, thoughts on that? You wanna start?

Yeah, okay. So I spent five years working with high school students, like, every day after school, designing programs and things to help kids solve problems. So, things we should be teaching: there are a lot of great vocational training programs, but the thing we were really optimizing for, and trying to help kids figure out, is general problem-solving skills, understanding that you can identify a problem, figure out why it exists, and then propose solutions for solving it. It's this general design-thinking mindset: having empathy, being able to clearly define a problem, ideate, and then prototype solutions, right? There's a surprising, or saddening, lack of that in a lot of the world and a lot of education. We would actually talk to kids about what they're interested in, and they're interested in a lot of stuff; they're just not quite interested in all of the things being lectured at them in the school environment, right? But when you start to ask a kid, what do you care about, what are you interested in, they kind of light up and start to talk about it. And it may not be the things you want them to say, but they are the things that are real. Kids are actually really motivated and excited about things, and they see problems in their community, in their lives, in their schools, on the teams they're a part of. If you give them a little nudge to start to solve those problems, they start to do amazing things. And they start to build skills that are really valuable throughout the rest of their lives and that let them acquire whatever discipline-specific skills they need later on. So again, there's a lot of great vocational and technical training we could do, but that initial seed of a problem-solving, innovative, entrepreneurial mindset is very foundational to creating a workforce of problem solvers. At the very beginning we talked about upskilling workers within the Walmart facility, upskilling them into higher-level thinking and problem solving, not rote memorization, not just pick this thing up and put it over there. That initial problem-solving push could happen very, very young, and there are some great schools and frameworks out there that are creating kids who are super curious and motivated and passionate. And then there are other schools you go to where you see that that's very clearly lacking.

Yeah, we're doing a lot of work on workforce, both the cybersecurity and the privacy workforce.
I'm leading a public working group around the privacy workforce, where we're creating a taxonomy around what it means to have what we call a workforce capable of managing privacy risk. We're really keen on that framing, for privacy and also for cybersecurity, because it again gets at the interdisciplinary, cross-functional nature of these things. And you can imagine it can be tricky calibrating what that means on the ground. As the supply chain reaches more into the home, for example, you can imagine someone coming to your door to do some kind of identity verification that involves taking a picture. Well, is that person trained on the fact that they probably shouldn't capture someone else in the background of the house, because there could be legitimate privacy concerns there? And as we move toward immersive technologies, there are just gonna be more and more of these kinds of risks inherent in the everyday operations of an organization. So we're keen on identifying the tasks, knowledge, and skills necessary to support these workforces. It's a work in progress, still at a nascent stage, but to your point, it's really critical that this goes all the way down through the education pipeline, so that these types of skills are being taught. Some of them are gonna be soft skills, and some may be more technical.

So if someone wants to get into privacy, they don't have to be a musician and then go to law school to end up here. I would argue that if you wanna get into tech, you should just do lots of different things, because these sociotechnical aspects come up time and time again in these conversations, especially around AI. We're saying we need ethicists in the room, we need psychologists in the room, people coming from different backgrounds. I think that's just gonna keep getting more important. It can't just be purely technical people, because of these societal impacts that we're not gonna be able to avoid when it comes to the use of these technologies.

I think we have time for probably one more question, and I'll ask each of you the same one. This whole session today is about innovation, about the future of the supply chain. So I'm gonna ask you to jump ahead, to 2040. If you could describe in three words or less the most optimized supply chain, based on what you've heard today or somewhere else, what would those three words be, and why?

All right, I'll go: transparent, trusted, and human as the last one. We've heard a lot about transparency and traceability today. Consumer preferences are changing; people wanna see where things are coming from, how things are produced, whether they're safe, whether they use chemicals that may be harmful, et cetera. There's a lot of push for that, so I think transparency is really important. Then there's a need for all that data to be trusted and verified and real, which we've also heard a lot about: the quality of the data will drastically impact what conclusions we can draw from it. And lastly, human.
Like we've talked about, maybe there aren't as many humans moving things within a warehouse, but there's this in-home delivery component, and there's this more human service delivery piece. We heard Walmart will actually have as many or more associates, not fewer, because of automation. And I think what people want is to connect with humans, right? I don't want my coffee from a robot, even though it might be cheaper. I want to talk to my barista, because that human experience grounds us back in reality.

The quote from the associate in the Walmart video that was shared this morning, "I'm part of the future of Walmart," that was such a powerful quote: embracing technology, right? Not being afraid of it. Dylan, wanna close us out?

Thirteen seconds. In 2040 I'll be 60, so according to Joe, I'll be having my midlife crisis, which is interesting. Trustworthy, innovative, risk-informed. We talked about trust a lot. I want to connect innovation and risk, because if you're innovating, you're sticking your neck out a little bit. You're taking some risks; you're not just playing it safe. But in order to do that, you need a really clear-eyed sense of what the risk environment looks like, so that you're making an informed decision and know how far you wanna go when it comes to doing something new. So that would be my answer.

Great. Please join me in thanking Zach and Dylan. Awesome.