Welcome back to ThinkTech. This is ThinkTech Tech Talks. I'm Jay Fidel. Today we're gonna talk about what in the world is going on at OpenAI, but also in AI in general. What changes can we expect in ChatGPT and AI in general, with Professor Mady Parrot Miracorley? I get that right? Yes, I did. Thank you for coming on, Mady. Hello, everyone. Thanks, Jay. Thanks for having me here. I'll be happy to discuss all the exciting news around AI, OpenAI, and related use cases. There's so much going on. And sometimes you resent the fact the media focuses on one issue to the exclusion of others. But I tell you, from my point of view, and maybe from your point of view too, I'm happy to see the media focus on OpenAI. I want to know everything I can know about it, because it's a tool that can leverage my thought process, my life, my activities, everything, totally. So I don't mind when they just stream news at me all day long about AI, and certainly I don't mind talking to you about it. So let's talk about OpenAI, because there's a lot of news about Sam Altman and all that. And it sounds like we really don't know the whole story, but what are the parameters of that story we should know about? I think this has been in the news for a while now. I think the root cause of the news that we observed is that OpenAI was formed as a nonprofit organization. And recently, we've seen that the valuation of the company has reached $90 billion. And the news by itself shows a divide in the organization: the nonprofit board, and a team that wants to accelerate development of AI solutions faster, get to market, and explore. And the termination and rehiring of Sam kind of shows that divide, that the board had certain concerns. Obviously, there are a lot of speculations, and we don't know what happened. But the root of that division is a nonprofit board and tech-savvy people that wanted to accelerate and develop AI solutions faster.
The outcome shows that OpenAI had a co-founder that was bigger than the board. That's why they hired him back. And there's a lot of trust within the organization and in the co-founders, that they could lead and develop solutions. Well, I think the message is that what we see at OpenAI, we see in society as well. There is a divide. There are organizations and individuals that say maybe we should take our time. Maybe we should be careful. Maybe we should look at the humanity aspect of AI solutions as we build them. And there's the other side of the spectrum that wants to build on top of this momentum, accelerate development of AI, build and discover. And I think we've seen that across society as well, in government and the public sector. And in the private sector, let's say we look at Google, which took a different approach. They're more careful in terms of how they build, how they market, how they go public. But I think that story kind of reflects the nature of OpenAI as a nonprofit: can it remain nonprofit? Are they gonna monetize, which we see them doing at OpenAI? That divide kind of taps into what we face, maybe as a transition phase as a society: humanity versus technology helping us be more productive. Yeah, I totally agree with you. It goes beyond just the technology. When people realize that it has a huge effect on the world, inevitably they get involved. And I'll offer a thought, and I'm interested in your view of this: if OpenAI started out with a certain nonprofit aspect to it, which it still has, at least nominally, then when they looked around and they saw all the other tech companies starting products that were profit oriented, they said, well, we can't really limit ourselves this way. We'll be eclipsed, we'll be a backwater if we don't watch out. We have to get on board with the profit side of things also. So that must have created a certain tension.
And I'm sure in the marketplace in general, in the sector called the AI sector, there's a tension, but inevitably it's like the forbidden fruit in the Garden of Eden. It's so tasty and so profitable, you can't resist. You agree? I agree with that. I think that's one of the challenges that potentially OpenAI, and Microsoft as one of the major investors in OpenAI, have faced. Seeing all the private sector and even foreign countries, China, tapping into this technology, making profit, building solutions on top of it. I think that is tempting, and that potentially has contributed to some of the conflicts within OpenAI. But I think we should pay attention to AI for good. This is gonna change the world and we have to pay attention. I think it's good to have organizations that don't monetize on it. At least for the short term, in terms of leveraging these algorithms, small businesses will have trouble. The reason they would have trouble: if you look at, for instance, Bloomberg, they came out and developed BloombergGPT, which is specifically trained on financial data. And I'm sure it can do massive predictions and the type of analysis that you would need as an expert in that domain. But just training or fine-tuning a GPT model over three days would cost $3 million. Who could tap into that type of training, right? These are large organizations that could spend a million per day for three days to fine-tune and build a novel product, test it, evaluate it. I think we need to have these nonprofit organizations; they can still generate revenue, $90 billion, but they can look at good use cases and AI for good. I'm hoping that OpenAI stays true to that role and doesn't change the original vision of the company. But you mentioned China, and certainly I would say Russia must be involved as a state actor. So you get the nonprofits, you get the for-profits, the multinational for-profit corporations, but then you get the state actors.
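The back-of-envelope arithmetic behind that "million per day" claim can be sketched in a few lines. The GPU count, wall-clock hours, and hourly rate below are illustrative assumptions chosen to land near the speaker's figure, not Bloomberg's actual numbers:

```python
# Rough compute-cost estimate for a large fine-tuning run.
# All inputs are illustrative assumptions for the sake of the arithmetic.

def training_cost(gpus: int, hours: float, dollars_per_gpu_hour: float) -> float:
    """Total cloud cost = number of GPUs x wall-clock hours x hourly rate."""
    return gpus * hours * dollars_per_gpu_hour

# e.g. a hypothetical 2,048-GPU cluster for 3 days at an assumed $20/GPU-hour
cost = training_cost(gpus=2048, hours=3 * 24, dollars_per_gpu_hour=20.0)
print(f"${cost:,.0f}")  # $2,949,120 -- roughly the $3M / "million per day" scale
```

At these assumed numbers the run costs about $2.9 million, which is the order of magnitude the speaker is pointing at; real costs depend heavily on hardware generation and negotiated cloud rates.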
And the state actors have their own agenda. They wanna use it for war. They wanna use it for diplomatic relations. They wanna use it for control of their electorate, their public. And I guess I'm asking, what is likely to prevail, at least in the foreseeable future? Who will control? Who will make most use? Who will invest the most money? Will it be the nonprofits? Will it be the multinational for-profits? Maybe the smaller for-profit companies? Or will it be state actors who have big plans? That's a good question. And I think we are in an arms race. If you look at it, everyone is competing, right? China, Russia, the United States. The European Union is competing in other ways: they're putting out regulations to control the situation. And if they manage to put the first regulations out there, then that's another significant influence that they have on the development of AI. I think in terms of state actors, there are a lot of use cases that are new to us. Misinformation, malinformation, Chinese propaganda; we've learned it. We know that is happening; in my previous discussion with you, I covered that partially. But now there are new use cases where you can use the deep learning techniques that are coming out, large language models, to implement cyber attacks. You can use that in manufacturing to innovate and create new technologies. I think that's the landscape where it's important for the US government to support existing technology companies like OpenAI, and the emerging ones, to accelerate in this area. It's in the best interest of the country to have these companies, whether for-profit or nonprofit, because we want to be ahead of the game. So I think that is certainly a factor. But in terms of adversarial use, defense use, ethical use, it's very, very complex.
And we need to be able not only to be ahead of that, but also to globally regulate what you could do with these technologies, and what is the situation where maybe AI, in the military defense domain, can create weapons of mass destruction. And I think those need to be potentially regulated. Well, it's interesting, you raise, at least in my mind, the question of exactly how importable and exportable the elements of AI might be. In other words, say I live in the EU and I have a small AI company. It lives in a basement because it doesn't take a lot of space. The government may not know exactly where I am or what I'm doing. And I say to myself, this regulation that the EU is putting on me is a burden. And I could do better if I focused my efforts elsewhere, like in the United States. And the United States, I recall that in the movie Oppenheimer, recently very popular, the United States recruited a lot of scientists, even from Germany, and brought them over here for the Manhattan Project to develop the A-bomb. And so this could happen elsewhere; it could happen in the United States, China, Russia. They could recruit people from other countries to work on AI projects, whether it's weapons or maybe positive use, but more likely weapons, and bring them into their country, their national state efforts to build AI. Furthermore, the cell of people in the basement in Europe doesn't have to move to Russia, it doesn't have to move to the United States or China; it can do its work in exactly the same location and help efforts elsewhere. So it's completely multinational, it's global. Do you agree that we'll be more global going forward? I agree with that. And I think that adds to the complexity of really putting safeguards on one side and, on the other side, accelerating development. On the safeguard side, it would be very hard to know which policy would apply.
If the European Union puts out certain policies versus regulations that the United States develops, it would be very difficult to know which one would apply in terms of jurisdiction. Sometimes a user might be in a different country, and the technology might be produced in a different country. There's also the ability to even know where your users are; so I think that would make it a lot more complex. On the other hand, I agree with you that we need to tap into the talents that develop these technologies. If we look at it from an academic perspective, a big percentage of scientists are foreign nationals. Now, I've written to the national cyber directorate that we need to provide mechanisms to keep these scientists, potentially maybe even giving them green cards and facilitating their immigration, so that we could tap into their capacities and leverage them, instead of having them leave the country and go to other places where they could build and develop these things. And these are people that we trained within the United States. We used taxpayers' money, we funded them, we made them excellent in what they do in AI research. Now they have to go through that process, and there might be opportunities elsewhere. I think we've done that, as you mentioned, with the Defense Act. There was a time that we leveraged foreign nationals in accelerating R&D. I think that's one area. The other area is domestic training. We have a lot of students, undergrads, even maybe at an earlier stage in high school, that I think need to get into learning how to take advantage of these technologies, how they could build new products. Especially now that we are getting to the stage that, with these AI solutions, we can do end-user programming. We do not need to write code to the extent that we were used to writing code. And that opens up the space for even someone in high school that has some ideas, so that they could build technologies.
I think it's a competition in terms of talent, in terms of opportunities, and in terms of regulations. Yeah, I recall a show we did on ThinkTech, this has got to be 10 or 15 years ago, about nuclear scientists in the numbered cities in Russia that were dedicated to universities that were doing physics, nuclear physics. And after the Soviet Union fell apart, you had a lot of nuclear scientists sort of on the market, and the US was concerned that they would land in a bad place. So the US had a fund, an NGO fund, but it was government money, to recruit them, to buy them, so that they wouldn't go to a rogue nation and work on nuclear science there. So it could be the same thing. If the United States really cares about dominating AI, especially, you know, weaponized AI, we could follow the same pattern. We could encourage them: come here, work for our companies, work for our government, in the same way as these fallen Russian nuclear scientists. And I think from a political point of view, from a foreign policy, from a global policy point of view, that would be the best thing we could do, because otherwise we could have rogue nations take our scientists, or take other scientists, other computer scientists and AI developers, and put them to work on the wrong things. You know, just in my ordinary daily life, getting the information that social media sends me and that's reported in the newspapers, I have spotted a number of programs other than OpenAI's. I've seen that at OpenAI now, you can get on a waiting list to pay $20 a month and have GPT-4, which I'm not sure how good it is, but they say it's much better. You can sign up for Microsoft Bing, and now recently Microsoft Copilot. I think those have the advantage of access to the whole internet, which is really interesting and beyond the current OpenAI GPT. You have Google Bard, and I think Google has another one too. You have Facebook's Llama, you have Amazon Bedrock, and Q just came out; Q is sort of a wonderful name.
You have Adobe, especially Photoshop, which expands your photos. You have D-ID, which does avatars that can talk from text you send it. You have my personal graphical favorite, Midjourney, through Discord, and you have Leonardo, which is also very, very good. And every time I look, maybe, that list doubles, and every company in the world, including companies you never associated with AI, is advertising AI products. What is going on, and how do I pick the products I would like to focus on and sign up for? That's a very good question. I think some of these companies that you mentioned already were AI companies. They just came out very publicly. But if you look at Google, Microsoft, the Amazon shopping website: it was an AI company. It has had a recommender system for as long as I can remember, but now, with these large language models, they can do very sophisticated things that were not possible 10 years ago in recommender systems. I think the list is gonna expand. A lot of larger organizations, because of their ability to fine-tune and train these large language models on specific data, are gonna be able to deliver unique products in that landscape. There will be some generic large language models, but if you look at the cost of training them and retraining them, the emergent companies are gonna be large, large businesses that have that ability to train. I would not pick a winner in this race. I think we're gonna see a lot of interesting things coming out, and the company that comes out first, let's say OpenAI, is not necessarily gonna be the winner. And in this competition, we've seen that while these major corporations almost did the same thing, they were quite different. If you look at Google, it had a monopoly. They never claimed that they have a monopoly, but they had a monopoly. The stuff that they provided was so unique that Microsoft could not provide it. The same with Microsoft. I think we're gonna see that grow, especially with these large corporations.
But what we are also gonna see is special use cases. Let's say it's a corporation that is in healthcare. They have access to all this healthcare data, so they could create a product that no one else has access to, because of the data that they have. I think what would differentiate these companies, and their suitability in terms of use cases, is the underlying data. The data that Google has, Microsoft doesn't have; the data that Amazon has, Google and Microsoft don't have; and the data that Bloomberg has, none of those corporations have. I think that's where massive fine-tuning would result in very smart AI solutions. I think data here is gonna be the most valuable piece of the puzzle. And even in innovation, if you look at, let's say, NVIDIA making chips: they have a lot of creative solutions and manufacturing practices internally. If they threw AI at it, they're gonna build a solution that could creatively design new products. And that's their own internal IP that has been used to train the AI/ML system. For picking a tool, I would go back and see who is building it, and what is the underlying data that they have access to that all the competitors don't have access to. And that would put me in the direction of choosing the right tool, the right company, for my use case. Yeah, well, for me, the jury's out, but I would choose a ChatGPT program that will incorporate everything on the internet. I want it right up till right now, immediately, so that it's completely current and complete. And that would be really, really powerful. And that leads me to, just from your comment a minute ago, maybe. So if one of them has more data and the other one has less data and different data, do you see in the future a consolidation? This is scary, a consolidation of these companies. So for example, ChatGPT has a certain amount of data, but it's not as much as Microsoft or Google or Amazon.
And do you see them buying and selling and merging in order to merge their databases and their large language models, so that one is the category killer? I know there are antitrust issues around that, brand new legal issues over that. But it seems to me that if you wanted to be a category killer, you have to find more data. And that means you have to find it in companies that have data that you don't have. And that means you have to consolidate and merge. You think that's gonna happen in our AI marketplace? I think we are gonna see that in the AI marketplace. We're also gonna see it expand into other application domains, finance, et cetera. On that antitrust issue that you mentioned, I think we have to pay more attention to acquisitions. It's not about the software and interfaces and APIs that we see; it's about the data behind them that could enable massive new use cases, monopolies that we didn't know before. And that would be a risk that I think we have to pay attention to and manage. Federal commissions need to get involved and see what type of risks we would be exposed to. I think those are some of the concerns that we see in society about what's gonna happen with the acceleration of AI. With all the data that has been collected through the internet, and the use cases where we are now using that data, there are some concerns, privacy and security concerns, that we need to tap into. Hmm. Yeah, so I'm just wondering about the interface too. Now I can go onto the pretty much vanilla interface of OpenAI's GPT and type my prompt in. Okay, it doesn't take a lot of skill to do that, but I noticed that some of the other companies that are adopting AI interfaces make it much more user friendly. They make a menu, a whole bunch of choices you can make, make it easier for you. Where do you think the interface, the sophistication, or the user-friendly interface design, is gonna go?
Will it stay sort of OpenAI-type simple, with just a prompt box, or will it give you a series of choices on a menu? I think even now we see that with OpenAI and ChatGPT, you have access to APIs, application programming interfaces. Companies that build pretty interfaces actually tap into those APIs; they make method calls and they pay OpenAI for any call that they make, and if they exceed certain limits, there's a pricing model that they have to follow. So that has already been monetized, and I think off-the-shelf AI algorithms are gonna be a market where there will be a few major performers, because of the cost of training and retraining, that provide those APIs. Smaller companies would pay to have access to those APIs. There would be some mid-scale companies that fine-tune these large models internally; they can spend $3 million, $4 million to fine-tune, and they would expose their own APIs for specific tasks. Then you would see a lot of use cases, app developers, web developers, and emerging use cases that would purchase these APIs, at a price per call or whatever model they agree to, and they would build those beautiful, easy-to-use interfaces integrated into your phone. Maybe you don't even see that the large language models are in your phone; within your daily interactions, you get feedback. But that, to some extent, is accelerating right now at all scales: large corporations, small corporations, and individuals that are building apps. Yeah, it reminds me of Steve Jobs and the iPhone, and designing apps that anyone could use, that would solve any problem with the apps, and opening the market so that anybody could design an app. And that has redefined apps on your phone, for sure, over the past, what, 10 years.
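The pay-per-call pricing Mady describes usually comes down to metering tokens on each side of a request. A minimal sketch of that accounting, where the per-1,000-token rates are made-up placeholders rather than any provider's actual published prices:

```python
# Estimate the bill for metered API usage, priced per 1,000 tokens.
# Rates below are hypothetical placeholders, not real published prices.
PRICE_PER_1K_INPUT = 0.01   # assumed $ per 1K prompt tokens
PRICE_PER_1K_OUTPUT = 0.03  # assumed $ per 1K completion tokens

def call_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one API call: each side is billed at its own per-1K-token rate."""
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT \
         + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT

# A month of traffic for a small app: 10,000 calls averaging 500 in / 700 out
monthly = 10_000 * call_cost(500, 700)
print(f"${monthly:,.2f}")
```

This is why the "buy access to the big model" pattern works for small companies: at this kind of metered rate, a modest app's monthly bill is hundreds of dollars, not the millions that training would cost.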
So it sounds, from what you say, that you have this group of the heavy lifters who have the data and the large language models, and then they would sell access to that to smaller companies, even really small companies, which have an interface that's friendly and can buy access to the large language model from the big AI company. It sounds like that's where it's going, and if that's where it's going, we're gonna have a proliferation of these small app companies with the menu that have access to this extraordinary amount of data. Is that right? That is correct, and I think AI is gonna be the next software. Anywhere in the world that today you see software, I think we're gonna see AI software sitting behind it. Whether it's your car, your phone, your browser, search engine, shopping cart, shopping software, trading software, whatever you look at on your computer, AI is gonna sit there. I think that's gonna be the next type of software that we will be working with. Now, it strikes me, and I think this is probably already happening, that certain marginal products are saying, hey, we are AI, we're really hot stuff, when it's not true, and they're not AI, and they're not paying the cost of access. Because if you look at the advertising now, on every single software product, I mean, I don't think there's any exception, they all say, oh yeah, we're AI, watch us go. Am I right? Are you seeing false advertising on this? Yeah, I think there's a lot of that. Everyone wants to catch up with the acceleration of AI solutions, so they make claims, and I think we need to be careful, because we also don't know very well the consequences of it, the harms.
I know that recently there are a lot of organizations, like the Software Engineering Institute at CMU, that have created AI incident response teams, because we are gonna see things that don't work, or there are ethical issues, biases, accidents with cars and technologies, and also financial decisions, and those incidents need to be captured, reported, tracked, and traced to the actual model that caused the incident. It's an exciting time. Yeah, well, one of the other things that I noticed, just looking over the field on the internet, is that there are jobs. And of course, people worry about losing their existing job because AI will replace them, no surprise there, but that happens with any industrial revolution. However, jobs are also being created, and there's news about that. And what I get is that you could be a prompt engineer, which I guess is designing the prompts that take what the user wants information about, send that to the large language model, and come up with an answer. God, that's pretty sexy, actually. What a job that would be. It's like programming on steroids. So you make the prompts, and for this they pay you $250,000 every year, and this is a lot of bread. I think a lot of people would be enticed by that, because that jumps over whatever they had in mind for their career. I think it's true, and if you're creative in terms of coming up with prompts and queries, it's a very interesting job as well, and a lot of the results that you would get depend on the prompts. So I think that's why certain skills would be required there, but I see that evolving as a new role and job in the market. So how do I make myself ready for that job? I think the more you use these techniques, the more comfortable and the more skillful you become. I think we have to catch up. We have to make sure that we stay relevant with these technologies. What about other jobs? What about other jobs? Can you think of other jobs? Might even pay more?
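In practice, much of what gets called prompt engineering is less like classical algorithm design and more like writing, templating, and iterating on instructions until the model's answers improve. A tiny sketch of that workflow, where the template wording is purely illustrative:

```python
# A minimal prompt template: the "engineering" lives in the wording and
# structure, which get refined iteratively based on answer quality.
TEMPLATE = (
    "You are a {role}.\n"
    "Answer the question below in at most {max_sentences} sentences, "
    "and state any assumptions you make.\n\n"
    "Question: {question}"
)

def build_prompt(role: str, question: str, max_sentences: int = 3) -> str:
    """Fill the template; variants of this string are what get A/B-tested."""
    return TEMPLATE.format(role=role, question=question,
                           max_sentences=max_sentences)

print(build_prompt("financial analyst", "What drives bond prices?"))
```

The skill Mady mentions, that "a lot of the results depend on the prompts", shows up here as which role, constraints, and phrasing you choose, and as systematically comparing variants rather than guessing once.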
I think the way it impacts the skills is evolving. As I mentioned last time, I had students that were worried about becoming software engineers because they thought it's gonna take over their jobs. I think it's gonna shuffle the type of jobs that we have. Some may disappear and new jobs may appear, and it's similar to the time that automation came in, software came in, and someone working in an office was worried that the software meant half of the team was gonna lose their jobs, because the software would do that work. The software just changed the way we worked. And I think that would be the same with AI. Some of the jobs may disappear, but new forms of jobs would appear, and we need to tap into those. And the goal is to make us more productive. Well, it sounds like a lot of people will get involved, a lot of creative, smart people. And meanwhile, the basic technology is moving forward, with so many reasons to invest in the large language model data, the whole internet. ChatGPT and all of those related search programs, prompt programs, will be more and more powerful. And we talked before the show about the kinds of things you could do as a consumer. This is all good, and I'm getting really optimistic about how it can help us in our daily lives, and how it can help government, how it can help scientists themselves. And one of the things is, you wanna determine good public policy, just ask it. You wanna determine a good strategy for foreign relations, just ask it. You wanna find out what you should do with your life today, just ask. What you should have for your next meal, it'll tell you. And it's kind of extraordinary that this is pervasive and ubiquitous, and that when people realize how much they can ask, A, they're gonna use it, and B, the industry is gonna provide more and more quality search results, so as to answer any question in the world. I mean, our lives, gee, it's a little scary, isn't it? Our lives, and not too far away. And what are we talking, a year or two?
Anything you wanna know, any guidance you wanna know, even personal thoughts, psychology? I mean, I have a problem in my relationships, what do I do? And it'll tell you, and then you'll say, well, no, you didn't get that quite right. And it'll say, well, let me tell you something else. And so you can have a conversation with it. I can see this affecting life on the planet, every culture, every country, every situation. Am I being too optimistic, maybe? No, I think that's realistic, it's gonna happen. And I feel we have to tap into the power of AI for public good: better government, better decisions can be made. Taxpayers' funds and money are going there; how could we improve decision-making in government, faster decision-making, so that things don't take eight years or two presidential terms to be implemented? I think that's one area we could certainly tap into: strategies, and the costs associated with those strategies. What would happen if I make this decision? What would happen if I don't make this decision? What would happen if I don't make this decision today? Let's say, if you look at Maui's fires: what would happen if I don't make this decision today? What would the consequence of that be? I think for that type of decision support, large language models are not only capturing the collective knowledge that we as humanity had, which was captured in the data on the internet, but also creative connections of this information that one individual cannot keep in their memory; they cannot think about all the consequences, but these machines can keep all of those together and reason about it. I think it is realistic, and we should think of good use cases that improve humanity. How do you prevent? We talked about this in the Paul Chung lecture. How do you put a lid on it? How do you put guardrails? How do you regulate? So it's not being used for nefarious purposes, for weapons, for destroying things and people and societies. How do you put those guardrails on?
Can you build them into the large language model? Can you build them into the interface? Can you legislate these things, or can somebody always get around it? I think with these advancements, there are new attack surfaces and threats emerging that we need to be careful about. Obviously, regulation is good for putting safeguards on what corporations can do, because corporations are gonna go after maximizing capital and revenue on these technologies. And we've seen a lot of concerns in terms of data being shared and not having accountability; those are characteristics of corporations. We're gonna see those emerging. But I think it is important for us to understand what we need to preserve in terms of civil rights, human rights, privacy requirements, and so on. So I think the regulations need to grow. I don't think today we know all the risks and threats and harms. Last year, maybe around May, there was a meeting at the White House between the CEOs of the major corporations in the US, talking about the risks and harms. We need to invest in research first to understand those. We need to work with legislators to put safeguards in place so that corporations are not free to use all this data in any way that they want, so that users' requirements and users' civil rights are protected, and privacy is protected. But I think there is a gap there. That was kind of the source of the divide at OpenAI, in my understanding: a nonprofit board and a technology-centered team. But I think we have to have more discussions. We have to have a mechanism to capture the harm that is created by AI. Today we don't know it, but there are incident response teams and databases; Homeland Security is planning to create databases around vulnerabilities in AI systems. I think having those databases, creating them, and tracking them would give us a better understanding of how to regulate this. Yeah, but your comments make me think of two things. One is Maui.
One of the problems in dealing with the Maui fires is that the government agencies that are assigned to approve permits, as on every island, are really slow, and sometimes they're corrupt, and sometimes they make mistakes, silly mistakes. And they do not, and this goes to your point about looking at the implications, they do not follow policies for the development of a larger area, like the whole city or the whole island, which they should do. And so I'm thinking, if I were a department of planning and permitting official, I would be, should be, very concerned that my job is gonna go away, because AI could do everything, rationally and honestly, with due regard for the implications of every decision, approving every permit, and the permit would be evaluated and the results obtained in minutes, not years. So that's just one example that comes to mind when you talk about the Maui fire. We really need that. I wonder if anybody's developing that. The other question that comes to mind from this whole discussion, maybe, is preserving you, preserving you. Because if I say that a lowly prompt engineer who designs the prompts can get paid $250,000, then there must be people out there who have heard your name and would like to have you in their companies. Furthermore, there must be people in Washington and elsewhere who would like to have you on their regulation boards. You're a hot property, Mady. And anybody in similar circumstances will be even more and more of a hot property for industry and for government. So what do you say? What are your career plans? What would happen if somebody calls you immediately after the show and says, Mady, we need you. We need you for a million dollars a year. What do you say, Mady? So that's a good question. I think for me, it's very simple. I chose to be in academia, to help kids, educate kids, be in the public sector, work for state government. So the public good of this technology is important. I support different agencies.
I do lots of R&D contracts, development contracts, and build solutions that are used in the public or private sector, even in academia. But I think helping humanity, helping the public sector, that would be a big part of it. There are a lot of my students and colleagues that are doing a great job in helping the private sector, or helping companies scale and grow. But I think with AI for good, we have to pay attention so we don't get to a situation where humanity becomes a slave of artificial intelligence and our lives and movements are managed and decided by non-humans. Oh, thank you for that. And thank you for your service to the community. Really appreciate that. And I think you're at the inflection point where we all are right now, in terms of a changing world, a disrupted science, a disrupted computer science, where it's going to be different, and it's like riding a wave. You don't know exactly what the characteristics are, but you know you're going somewhere fast. Well, thank you very much, Mady. Really appreciate you coming on. I hope we can do this again soon. Thanks for having me. It was a pleasure. Take care. Aloha. Bye.