Welcome back, everyone, to theCUBE's live coverage here in Las Vegas for Innovate 2024, SAS's event where everyone comes together to talk about policy, technology, customers, user behavior, and new experiences. I'm John Furrier, host of theCUBE, with Dave Vellante. We have Reggie Townsend, Cube alumni, back — VP at SAS for the Data Ethics Practice — and Miriam Vogel, President and CEO of EqualAI, doing some amazing work. Miriam, you're also on the National AI Advisory Committee, you worked in the Obama administration, and you're an attorney as well. Reggie, you're doing great work. We know your pedigree; you've been on multiple times. The policy side of this game is really interesting. AI is booming — it's the hottest trend, AI everywhere — and there's a lot of trust and responsibility that's really mandated in. You have that as a core guiding principle.

Miriam, I want to go to you first. Talk about the groups you're both involved in, just so people know what's out there being developed and can understand who is working on understanding AI at a national level.

Absolutely. Well, I'm very fortunate — I get to work with Reggie in both of my hats, and both fall very squarely on ensuring that trust is earned and developed for AI systems. The National AI Advisory Committee is a congressionally mandated committee that came out of the NDAA, which required several things, including that NIST create an AI framework — which we now enjoy as a really important tool to help level-set on how to establish responsible AI and what the best practices are — as well as numerous other developments. Our committee was established to provide recommendations to the president and the White House on AI policy with a host of experts, and it specifically mandated that we have AI experts from academia, civil society, and industry, which we do.

Reggie, what are you involved in? What groups?

Yeah, so I actually chair the education and awareness working group, and I also sit on the workforce working group. On education and awareness, we're very interested in things like AI literacy — making sure that everybody in the country has an opportunity not only to understand AI, but really to enjoy some of the fruits of AI. And that's where the workforce piece comes in, right? We recognize that any technology is generally accompanied by some level of disruption, and we want to make sure that disruption does not occur at a rate the country can't sustain. So workforce is really all about understanding how we make sure that doesn't become a reality.

I really wanted to get that up front before we get into the conversation, because there's a great innovation opportunity for everyone with AI, and at the same time, people are concerned. So is there a way people can get involved with some of these groups? Is it open to the public? How is this structured? I might be burning too many minutes here, but if there's an opportunity for people to get involved, they should know about it.

Yeah, so maybe we'll tag-team this one. All of us who sit on the committee were nominated to be a part of it, and there was a whole vetting process — so we're special government employees as a consequence. I don't know if everybody wants to go work for the government or not.
But in that sense, as Miriam said earlier, our task really is to advise the president and the White House — the National AI Initiative Office specifically, and OSTP — on matters associated with AI. Part of what we've done as a function of that committee, however, is broaden access as much as we can. You can find out when we're meeting; a lot of times we have briefings, and we make all of that available online. People can stream right in. In fact, if you're in D.C., a lot of times people can just come, because we know this has to be an everybody-all-in moment. And so it can't just be—

So they consume the content.

They consume the content, and maybe—

And participate in creating that content, because we're looking to hear from people so we can incorporate some of those thoughts into our recommendations.

Yeah, exactly. As Reggie said, there's the standard mechanism: it's a federal advisory committee, governed under FACA — which is an affectionate term in D.C. for these committees that come with a lot of regulatory requirements. One is that all of our meetings are live-streamed and open to the public. People can send in comments in advance, or come in person and ask a question — we leave time at each public session for that. We issue reports and provide an opportunity for feedback. We have an email. All of this information is listed on AI.gov. But part of what we did, adding to our mandate, is — as Reggie said — it's so important to us that we hear from more voices: leading experts and new experts, new voices, those we don't hear from enough. We want to make sure they're incorporated in our advice and our education, opening up the tent. So we've had many panel sessions — we had one just yesterday. They're all online, live-streamed and recorded, so people can enjoy those as well.

Great transparency. Thank you for sharing. You're very thoughtful individuals and thinkers — it's not like you're trying to control the world of AI. But there are people, big tech in particular, very concerned about regulation. Maybe that's a way of saying, hey, let us self-regulate — which is probably not the answer. What does history tell us about the impact of regulation? And knowing what you know, how should we be thinking about regulation in AI — flexible, but at the same time making sure innovation is there while creating safety?

I think if done right, regulation helps build trust, and it helps support innovation. So first, building trust: if people aren't going to use it, it didn't happen. We need to make sure these are systems people want to use, so AI can be the democratizing force we want it to be — so it can upskill, broaden education, and create these health care outcomes that are just so exciting. But we have to make sure. If we hear a health care outcome — if we hear, say, that dermatological screening can be 93% effective when used with AI, that costs can go down 50% and health care offerings can improve 40% if we're using it right — we have to ask: is that true for all Americans? Is that true for all of our citizens? Have we tested to make sure it's not just whoever was in the clinical trial, because that's often going to be predominantly men and predominantly Caucasian?
Do we know that when a doctor gives an indication to a patient, they're not giving a false negative because that person was not part of the representative population the AI system was tested and trained on? So building trust — making sure there are safeguards in place — can help build trust.

And accessibility, to your point. That brings up a whole other question of who can use it. Can the AI hear all of us? Can it see all of us? We've had several federal agencies come out and share that the laws they currently have on the books are actually applicable to AI, in addition to the additional regulations we'll need. On accessibility, the EEOC was one of the first out to say: if you are using AI in how you work with people in your employment systems, you are regulated by us and our civil rights laws. And they had a historic first joint statement with the Department of Justice saying, you'd better make sure everyone can equally hear what you're producing with AI — and see it, and other forms of inclusivity — because otherwise you could be violating the Americans with Disabilities Act.

So that transparency is key. That's hugely important.

I want to jump on the idea of regulation, because people tend to have a visceral reaction sometimes, as if the whole conversation will just slow down innovation and stop it, right? We have rules, we have laws — they exist today. As Miriam says, it's not like we're lawless here with AI. There are a lot of laws on the books today that already regulate AI, so let's be really clear. I think what people are really afraid of is whether or not they will have permission to just run roughshod and do whatever the heck they want to do. No is the answer to that.

Yeah, absolutely.

And nor should they, and nor have they ever been able to. So I think there's a little bit of disingenuous sentiment that creeps into that conversation. All of us mature adults realize there are potential negative consequences that we have to mitigate. And all the quote-unquote regulation conversation is about is trying to mitigate those harms so that we don't harm actual people like you and me. That's it, at the end of the day.

Well, I think the visceral reaction is: oh, the government can't keep up with AI. Okay, it's easy to say that. But at the same time, for the example you gave in health care, I want regulation. Where it gets interesting — and I'd love your thoughts on this — if the government says to Apple, you can't game the App Store, good for them; they should do that. But on the other hand, there are certain regulations that large companies can afford to follow but a small company might not have the resources for — you understand; I'm sure you hear this all the time. That's a tricky balance for regulators. I don't necessarily have the answer. I'm looking for answers, and I think we evolve it as we go.

I think part of it is demystifying. We're talking about regulation — it's such a big word, and such an unfriendly word. But think about where it has been helpful, where it has built trust or spurred innovation. Think about airlines. People generally feel safe because they know the industry is regulated; there are rules. If you're driving, there are lanes on our roads, and for the most part, people stay within their lane.
They know to stop at a stop sign, and not every street has the same speed limit, but we know there is a speed limit. I think that's a really good model. Not every state, not every action, not every level of activity needs the same level of oversight. But if we're talking about areas that could be really dangerous — high-impact areas like hiring, health care, et cetera — that's where we want to make sure there are clear rules.

It's interesting that you both have that perspective, and I love that standards framing, because look at roads: you've got to get the infrastructure right, the foundation. And if you overbuild, over-rotate — whatever word you want to use — on hypotheticals that might happen, you miss getting the basics right. Rules of engagement. How do we do that? I wanted to get into this because there are other things we can look at. For instance, the FCC and broadband — AI fits into that. The domain name system started in America and then became a global phenomenon. So AI has this global piece, right? Now we have global AI with a geopolitical overlay. Really interesting dynamics — I've never seen this before. How would you scope this challenge or opportunity with policy in the U.S. right now, notwithstanding the innovation opportunities, just to make it work? Is it hard, or doable, attainable?

I think this is one of those things where you eat the elephant a bite at a time, right? First of all, start with the definition of AI. I know we've had several of them here in the U.S., and there's now agreement, at least between both sides of the Atlantic, on a clear definition. So you've got to start there. We can debate the implementability of the EU AI Act, but I think they did a good job in at least approaching this by thinking about risk: we recognize we don't want to capture all things AI — that's not the point — but if you are going to deploy AI in a certain domain that might have a certain impact, we should know about it. I think that's a reasonable approach for now.

Now, the reality is that right and wrong is always going to be a function of time. What might work now may not work five years from now. So the ability to look at this from a longitudinal view will be really important — monitoring the AI over time. But I cannot overemphasize this idea of use case, and we talk about that a lot at SAS. AGI is not all of AI; LLMs are not all of AI. We do a lot of AI with very specific use cases in pharma, financial services, law enforcement, and what have you. It's really important that we get those bits right, especially because they represent a level of risk that would be unacceptable if we got it wrong. If you want to go figure out your travel plans, great — you can have some variability there; I'm okay with that. But not in some of these highly impactful areas that, by the way, are already regulated. So we've got to make sure we get it right.

Well, EqualAI — it's in the name. Love the name of the company, by the way. My big issue, kind of along Dave's line, is that the entrepreneurial opportunity in this country is what we were founded on.
And, you know, there's been talk — I've certainly said this on theCUBE and at other events — that the rich get richer with the cloud and AI. To Dave's point, scale is about size. I joke about a "GPUs for America" campaign — public-service GPUs, tongue in cheek — but GPUs really are a scarce resource, like power. So as the infrastructure gets richer and bigger, it's going to be harder to have equality — for me to compete, do something entrepreneurial, or provide a service, potentially. I bring it up only as a hypothetical, but that's a scenario that could play out. And again, broadband in rural areas was mandated — that was a policy decision. I thought it was one of my favorite things Bill Clinton ever did, in the Farm Bill. That was great, but it didn't happen as fast as it could have. What's your view on the ability to move the needle faster, to get it done right, so we can have the opportunities and live symbiotically with the big players? Is that a factor, or am I overthinking this?

No, it needs to be all hands on deck. I think the stakes are high enough that I feel comfortable people are going to rise to the challenge. We will not have a sufficient workforce, we will not have a strong enough economy, if we do not have more people ready to participate in our AI economy. That doesn't mean everyone needs to be a computer scientist, but it means we need more AI literacy, like what Reggie was referencing earlier. On the international front — it's such an important point — AI doesn't have boundaries. So how are we going to operate in a false narrative where we look only at certain domains to understand what the rules are? The good news, both for that and for thinking through innovation in general, is we've had some significant steps forward. We've seen activity in Congress. We've seen a lot of work where they're trying to educate themselves, with the AI Insight Forums — a bipartisan, bicameral effort over the fall. We've seen the president's AI EO, the executive order released in October, which I believe was the longest in history, with 150 action items assigned to over 50 federal agencies and quasi-federal agencies.

I don't know if they like to admit that it was the longest.

I don't think so. Keeps everybody busy. Reggie, keep your day job.

There are deadlines coming up in the next week — 120-day deadlines where action plans are required. The Department of State and USAID have significant taskings there. But we should also note this has been building on actions already underway. It's not lost on anybody in the federal government that there needs to be collaboration for the rules to be meaningful. So there's the Trade and Technology Council, and several established entities — the OECD, in which we participate, as well as other government bodies and organizations — through which we've been working to make sure there's alignment and clarity.

So I know we're talking U.S., but we've got to keep in mind this is a global conversation, right? Now, here in the U.S., one of the recommendations we actually offered on NAIAC was that the White House support the idea of the National AI Research Resource.
And what that is expressly designed to do is provide access for small businesses and entrepreneurs, because we recognize there's a bit of an arms race to gain access to platforms. So that's one. Two, I think we also have to question the premise — and I think they're doing this in a lot of other nations — the premise of whether more is necessary to be more performant. There are actually studies that question whether you need more data and more GPUs to reach the levels of performance that are necessary. So what's your goal? Is your goal to build the next largest foundation model with billions of rows and parameters? Okay, that's one thing. But if your goal is simply to use AI to advance the needs of your business, and you're not in that business, that's something completely different. So I think we've got to cut at this a couple of different ways, and when you look at what's going on in a lot of different nations, they're cutting at it from the latter way, not the former, because they recognize they're not getting into the foundation model business. There are a lot of wealthy nations and a lot of wealthy companies trying to get to AGI, and they're throwing as many GPUs and as much data at it as possible.

Are you suggesting there should be — and maybe there already is — a conversation from government about regulating that, or at least having some degree of transparency around it? I wonder if you could follow up on that.

I wasn't making that statement just now, but we can have that conversation. I think in terms of purpose, let's have a broader social conversation and not just leave it up to a handful of bros as to whether or not we ought to create something that is cognitively superior, right? I think that's worth a conversation.

Open and transparent has to be there, because we're living in an era where — the title of this session is about navigating ethical horizons. Ethics is a big word, and a big part of this. But that is happening. It's happening, and there don't seem to be any kind of guardrails on it.

Right, so again, this is a collective-action issue. We can't just leave it up to a handful of tech companies, and we can't just leave it up to a handful of governmental leaders either. This is why this idea of literacy is so important to me. We've got to make sure people are empowered with information so they can figure out for themselves whether they're okay with the direction we're headed, with things like AGI. Now, AI is much bigger than just AGI — let's say that out loud — and it's okay for people to question some of those aspects as well. But first you've got to educate people on what it is, so they can have a better sense of whether they're willing to accept how it impacts their lives. And one last bit: when you look at studies globally on whether people are okay with AI, what you're starting to see, generally speaking, is a rise in AI anxiety more than a willingness to accept AI. That's problematic for folks like us at SAS. We're providing this product; we want to make sure people eventually embrace it. We think we're doing it for the right reasons. But if people are just freaking out because they hear "AI," that's problematic for us and everybody else.

It's a huge problem.
We've been reporting on this hard, because we've seen evidence — real evidence and examples — of people and companies that have changed the outcome of their situation by having access to AI and the knowledge to level up. There's a huge leveling-up: anyone can get to a point quickly. You don't need many, many years of computer science — you mentioned computer science. You can be smart and instantly smarter. So I think access to AI is going to be the critical thing, and it can't be one company.

I think it's a really important point you're making. I love this conversation. You can talk about products and such, but AI makes you smarter. Who doesn't want to be smarter? At the same time, I worry about some rogue nation getting AI before we do — I think we're the good guys; not everybody would agree with that — and that scares me. But I'm an AI optimist: it makes me smarter.

Well, this is where we debate all the time. This is not the Manhattan Project, as Vinod Khosla thinks it is — he said that publicly.

I mean, how do you know?

Well, because open source is already global and it's already happening. For this to be a Manhattan Project, we'd have to have a national cloud — a national AI cluster — that would be national-security based. We probably do need one, but AI is already open source globally, so we'd have to change the paradigm of open source. That would be a policy question: how do you enable the growth of open source software? But it's how you apply that open source software, right? I want to see a solution. I don't see a solution. This is the debate of our time.

Absolutely. And I think it's important for us all to take a breath, take a beat, and say there are just some things we don't know. It's okay to hypothesize about some of this. But to me, it's much more impactful to say: we've got something that makes us smarter, and we can apply it today to people who could use this capability. So what if we could take, I don't know, 25% of our energy and redirect it to people who are without today? A couple of weeks ago I was in London talking to the folks at the Commonwealth. The Commonwealth includes 33 small states, but altogether it covers about two and a half billion people — those 33 small states are just the smallest of the nations it covers. These are nations that don't have the infrastructure to support a big data center full of GPUs and all of that sort of stuff. These are also among the less developed nations on the globe. So who are we to be talking about turning into a different species when we're leaving people behind today? Today. If we could figure out ways to help them embrace this technology to grow their economies today, that seems like a better use of our time than exploring the possibilities of destroying us all.

Reggie, that is great vision. Your passion is awesome — appreciate you for that. And thank you both for coming on theCUBE. We're going to wrap up, and we'd love to follow up. Miriam, we'll give you the last word. Love your company — give a plug for what you're doing: quick highlights of where you're at, what you're working on, and what you're hoping to accomplish.

Absolutely. At EqualAI, for five and a half years we've been trying to get people to focus on what it means to be a responsible AI actor.
The hard thing is, it's not clearly defined in national or international standards, as we were just talking about. So in the meantime, we work with companies that want to lead in this space and help them align on best practice: what does one do to make sure they're being responsible and thoughtful and building trust with their consumers? We also work with policymakers to help them establish the right guardrails — exactly as you said — so they can make sure they're spurring innovation, not inhibiting it.

Reggie, a quick plug for what SAS is working on — the ingredients, the nutrition label you have. You call it the nutrition label for AI? What is that about?

So these are model cards, where we're able to evaluate the health of a model — any AI model, whether SAS or Python today. It's the equivalent of the nutrition label we get on our food, but this time for AI. You get a chance to see levels of fairness and transparency, and whether a model is drifting over time, so we can all evaluate it against the end goal.

So we're going to call OpenAI the supersized model, and we're going to have to go on a low-calorie, smaller model. The sodium-free one — that's a smaller model. It might turn out to be pretty agile.

Yeah — SAS and Python, and R is coming, yes?

R is coming, yes sir.

Miriam and Reggie, thanks for coming on theCUBE. Okay, we're going to leave it there. I'm John Furrier, with Dave Vellante. You're watching theCUBE — we'll be right back after this short break.
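For readers curious what the "nutrition label" idea described above might look like in practice, here is a minimal, hypothetical sketch in Python. The ModelCard fields and the population-stability drift check are illustrative assumptions for the general model-card concept, not SAS's actual implementation.

```python
# A minimal, illustrative "nutrition label" for an AI model.
# Field names and the drift metric are hypothetical -- they sketch
# the model-card idea discussed above, not SAS's actual product.
import math
from dataclasses import dataclass

@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_population: str   # who the model was trained and tested on
    fairness_gap: float        # e.g., accuracy gap across demographic groups
    drift_score: float = 0.0   # grows as live data departs from training data

def population_stability_index(expected, actual):
    """One common drift measure: compares the share of data in each bin
    between the training (expected) and live (actual) distributions."""
    psi = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, 1e-6), max(a, 1e-6)  # guard against log(0)
        psi += (a - e) * math.log(a / e)
    return psi

# Hypothetical card echoing the dermatology example from the conversation.
card = ModelCard(
    name="dermatology-screener",
    intended_use="flag lesions for clinician review, not diagnosis",
    training_population="clinical-trial adults, predominantly male",
    fairness_gap=0.07,
)
card.drift_score = population_stability_index([0.5, 0.3, 0.2], [0.4, 0.3, 0.3])
print(card)  # a reviewer can now ask: is this model safe for *all* patients?
```

The point of the sketch is the reviewing pattern, not the specific numbers: a standard label makes fairness gaps, intended use, and drift visible so that non-builders can evaluate a model the way a shopper reads a food label.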