I mean, what the hell? Gemini, Gemini, I don't know how you pronounce it. Gemini is Google's AI, the competitor to ChatGPT. It was a rebranding and a new version that was released, I think, last week. Gemini replaced Bard, if you remember, it was called Bard. And as part of the new version, they included an image generator.

And this created a huge furor, because people started generating stuff with the image generator in particular. And the image generator seemed to refuse to produce images of white people. So for example, you asked the image generator to portray Nazis, and it gave a multi-ethnic presentation of Nazis. Blond, blue-eyed, and white was not one of the presentations. They asked, I think, about Vikings, and again, there were black Vikings and Asian Vikings and all kinds of Vikings.

So somewhere in its code it had this rule: always make sure that when presenting images of human beings, there is diversity. And it's an idiot, right? Because it is ultimately AI, so it's stupid. It took that literally. So when presenting Nazis, we have diversity. When we present Vikings, we have diversity. And so everywhere, it presented diversity. And it was almost impossible to get it to actually generate an image of a white person, or an image where white people were in the majority. It was told to generate diversity, and white is not part of that.

Anyway, it was pretty ludicrous, stupid, ridiculous. And of course, this brought out everybody accusing Google of being woke and everything. And it's true, they are. I mean, that is an actual reality: people who work at Big Tech, people who work inside Big Tech, we've talked about this, hold philosophical, ideological positions that tend to be quite far to the left. That is the reality. Why? Because they go to universities. Why?
Because the left seems to dominate among smart people, in particular programmers, or in particular people who've been successful as programmers. I don't know what the answer to that is. But if you survey Silicon Valley, a vast, vast majority of the people there are on the left, and some of them are wacky, crazy left, right? And it's something, I think, that if you're convinced that the left is wrong and the right is right (and I did a whole show about how left and right are, in a sense, useless concepts, because they don't actually reflect a spectrum), if you hold that somehow, then the right has a lot of explaining to do. Why doesn't the right attract smart people, educated people, people who hold important positions in important companies? What is so deficient about the right that it is unattractive to people like that? I know the answer to that, and I think many of you know the answer to that. But that's an important thing that the right needs to grapple with.

Anyway, so Gemini, the image generator, was obviously nuts. So they withdrew the image generator, and they've left the Gemini chat thing, the AI chat, where you can ask it questions and stuff. And I've seen a lot of commentary about how it is unbelievably biased. But it's hard for me to tell. And it's generally hard to tell when you see these things on Twitter or elsewhere. Is it real? Is it genuine? Could you replicate the search? The problem with all these AI tools is that every time you ask a question, you get a different answer. They might be consistent answers, but they're different. So it's hard to replicate what people are posting, and it's not clear that what they're posting is real. So I'm hesitant.

But is there any question that Gemini is biased to the left? No, it has a leftist bias. The people managing the project tend to be on the left. They have put in rules and guidelines that generally generate more leftist-leaning answers. But I ran my own test.
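As an aside, the reason you get a different answer every time is that these models sample each word from a probability distribution rather than always picking the most likely one. Here is a minimal sketch of that idea, temperature sampling over a toy three-word vocabulary (the words and scores are made up for illustration; real systems add further tricks like top-k or top-p filtering):

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample an index from a list of scores after temperature scaling.

    Higher temperature flattens the distribution (more varied picks);
    temperature near zero approaches greedy decoding (always the top score).
    """
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw from the resulting categorical distribution.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Toy "vocabulary" and model scores (hypothetical values, not a real model).
vocab = ["yes", "no", "maybe"]
logits = [2.0, 1.5, 0.5]

rng = random.Random(42)

# At a moderate temperature, repeated calls pick different tokens.
varied = {vocab[sample_with_temperature(logits, 1.0, rng)] for _ in range(50)}

# At a very low temperature, sampling collapses to the top-scoring token.
greedy = {vocab[sample_with_temperature(logits, 0.01, rng)] for _ in range(50)}
```

With temperature 1.0 the fifty draws land on more than one word; with temperature 0.01 they all land on the top-scoring word. That is why two people posting "the same" Gemini prompt can honestly show different screenshots.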
So I figured I'd run a test. I mean, first, here's one example. India got upset. India called in Google managers, because it was very upset: in an article describing Modi, the Prime Minister of India, one of the answers said that Modi was, quote, accused of implementing policies some experts have characterized as fascist. Now, I think that's pretty factual. Some experts have characterized it as being fascist. And certainly, Modi is pretty much a statist, and there are elements of what he's doing that are fascist. But the Indian government flipped out: this is leftist propaganda, stop it.

Anyway, I ran a test. I asked it just a few simple questions on stuff that I actually know. Who is Yaron Brook? It gave a pretty good answer. But then at the end, it says something that, on reflection, I don't think is non-objective. It says it's important to note that Brook's views on various topics, particularly economics and politics, can be controversial and generate strong reactions, and it's recommended to explore his work and engage with diverse perspectives to form your own informed opinions. Yeah, I don't disagree with any of that.

Then I asked it: what are Yaron Brook's most controversial views? It picked laissez-faire capitalism, income inequality, Objectivism, and views on social issues, including same-sex marriage and abortion, which are controversial to the right, not to the left. So that was pretty cool. And that doesn't show bias. And again, it says it's crucial to remember that these are just a few examples of specific aspects of Brook's views considered controversial, depending on individual preferences and ideological leanings. It's also important to note that Brook himself might not necessarily agree with the characterization of his views as controversial. He likely sees them as well-reasoned and logical positions based on his underlying philosophy and economic beliefs.
For a more nuanced understanding, it's recommended to explore Brook's works directly, engage with diverse perspectives, and form your own informed opinions on the topics he addresses. I'd say, not bad, not bad.

Then I asked it: what are Yaron Brook's views on war? And again, I think, wow, it did a good job. You know, war of self-defense: Brook believes that nations have the moral right to engage in wars only in self-defense, that is, when their existence or way of life is directly threatened. This means preemptive strikes are not typically justified unless there's undeniable evidence of an imminent attack. That's interesting. I'm not sure where it took that from. But OK, whatever.

The importance of clarity: Brook emphasizes the need for clear, well-defined war aims and a rational strategy. Wars cannot be fought for vague motivations or unclear objectives. Pretty good.

Individualism within the context of war: Brook argues that war should be waged to defend individualism, meaning the rights and freedoms of individual citizens against aggression. He opposes wars fought to impose collective ideologies or to pursue abstract principles not directly related to the defense of the nation. Pretty good.

Anti-imperialism: expanding on his belief in individual liberty, Brook has strongly opposed interventionist wars and attempts to impose democracy or a particular way of life on other nations. He believes that a country should only fight militarily if its own existence or freedom is at stake.

I mean, these are pretty good answers. Now, I asked the same thing on my laptop earlier, and I got different answers. So again, every time you ask it, you get something a little different. But they're not that bad. And at the end, under important notes, it says: Yaron Brook's views are based on a philosophical system, Objectivism, that may not align with everybody's worldview.
It's essential to consider the criticism and complexity surrounding his statements, particularly regarding the Israeli-Palestinian conflict. You can find further information on Yaron Brook's views on war by listening to his podcasts, visiting the Ayn Rand Institute website, or reading his articles. I mean, I don't know how you get much fairer than that. Most people I would ask who know a little bit about me would probably give a worse answer than that.

I asked who is Ayn Rand, and got a pretty good answer. And at the end of this answer, it says, legacy: despite the controversy, Ayn Rand's works continue to be widely read and debated. Her novels have sold millions of copies. The Ayn Rand Institute, dedicated to promoting Objectivism, is active in advocating for her ideas. However, it's important to be aware of the criticism surrounding her work and form your own informed opinion based on different perspectives.

All right, let's see, was there anything else? I mean, when you ask about anti-Semitism, it gives pretty vague answers. It doesn't want to really blame the left for too much anti-Semitism, although it does say something. And at the end, it says, don't conflate criticizing Israel with anti-Semitism. It wants to make sure you get that.

When I asked what is woke, it gave a pretty good answer, and I don't think it was particularly biased in terms of what is woke. Yeah, I mean, a right-wing AI would give a different interpretation, but it wasn't horrible. Same with what is social justice. It's definitely slanted. And you don't get, at the end, "think for yourself, investigate different opinions about these things." You don't get that for woke, and you don't get that for social justice. You get it for Yaron Brook, but not for those. And I asked it what the social justice view of Israel is, and at the end it says it's essential to remember that the Israeli-Palestinian conflict is exceedingly complex, with valid concerns and perspectives on both sides.
If you seek a comprehensive understanding, it's necessary to explore various views and be mindful of the sensitivity surrounding this topic. And you could go on and on and on. So I'm sure it gives horrible answers on some questions, and you can probably search long enough and get something really, really horrible. But most of the questions I asked, the same ones I asked ChatGPT, I got reasonable answers to. Not the answers maybe I would give, but I don't expect that. I think it's interesting. And these are amazing tools.

If it's systematically biased in subtle ways, which I think it probably is, you've got to be aware of that, and you've got to think about that. Just like when you listen to the news or read a newspaper or anything like that, you've got to be aware of the fact that any source of information you get has some bias, or is likely to, and usually of the left-leaning kind if it's in the media. The same thing is true of AI. And you've got to take that into account and adjust for it.

The challenge, of course, is for people who can't do that or don't know how to do that. And I don't know how you fix that. It's very hard, short of Objectivism and being objective, for people not to have a real bias around these things. I don't know that there is an easy way, in the culture we live in today, to fix it other than for you, as consumers of information, to be hyper-vigilant.

Elon's AI is going to be just as biased as any of these other AIs, no less biased, just as Fox News is not unbiased. Elon's not objective, and neither is any media source that I know of. But Elon Musk is super non-objective and super inclined to flippancy, which is not objective. So no, I don't think Elon is going to solve this problem. I don't think anybody today is going to solve this problem. I think it's a problem one has to compute into whatever use you're making of AI.
You've got to take this into account. OK.