this weekend. So, one of the most exciting boardroom dramas Silicon Valley has certainly ever seen, and one of the more exciting that we've seen in the United States in the last decade or so. This was pure Succession, or whatever you want to call it, pure TV drama. This was quite exciting. And it all centered around the hottest, sexiest, most interesting technology available today, which is OpenAI, which is, you know, artificial intelligence. It all started, I think it was Friday, when out of nowhere there was an announcement that Sam Altman, the CEO of OpenAI, was being fired for a lack of openness in his communications, something vague and noncommittal like that. I mean, Sam Altman is a major figure in AI. He was not only the CEO of OpenAI, he was one of the founders of OpenAI together with a number of other people; he was the big shot in this industry. He is one of the main people who went to Washington, hat in hand, asking for regulations. He is one of the people who on the one hand was saying AI is super, super, super dangerous, and on the other hand was pushing for massively increased development of AI. He was speaking out of both sides of his mouth, which we'll get to in a minute. Anyway, he was fired on Friday. A few hours after he was fired, his co-founder at OpenAI, Greg Brockman, who was also on the board with him, resigned in protest of the fact that Altman was fired, and came out and said that he doesn't understand what the hell is going on here and that he does not want to work for this company if Altman is not there.
Then it turns out, and we'll talk about the corporate structure of OpenAI in a minute because it's super interesting, and none of this could have happened if it had a sane corporate structure, but some of its investors, Microsoft in particular, which has put tens of billions of dollars, or maybe ten billion dollars, into OpenAI, started objecting: what the hell is going on here? We invested with Sam Altman. What are you doing? But not just Microsoft; Sequoia Capital and other major venture capital firms in the Valley who had invested in OpenAI started objecting. Interestingly enough, it came out that they're not represented on the board, none of them, not Microsoft, not Sequoia, which led me to think about the corporate structure, which we'll get to in a minute. On Saturday it looked like Sam Altman was in the building of OpenAI, so there was some speculation that they were negotiating to bring him back, and there was speculation throughout the weekend about whether he's going back. The interim CEO whom OpenAI appointed, Mira Murati, openly said: I don't want to be CEO, Altman should be CEO, I mean, why was he fired? This is insane. So she, together with a lot of employees, was pushing the board to bring him back. He came to the building, he spent a few hours there, there was speculation about them bringing him back, but he wasn't brought back.
And then this morning, I guess it was this morning, maybe late last night, the board of directors announced that they'd hired a new chief executive officer, Emmett Shear, the former CEO of Twitch, the video company. A well-regarded, well-respected guy, knows AI, has been the CEO of a successful company. So Emmett Shear is now the CEO of OpenAI. And then a couple of hours after that announcement, Microsoft, the largest investor in OpenAI, announced that they had hired Sam Altman and Greg Brockman, had given them basically what constitutes an unlimited budget to start their own advanced research lab at Microsoft, and had made Altman the CEO of this advanced research lab, basically, as I can only understand it, to compete directly with OpenAI, with which Microsoft has a deal and in which Microsoft has invested heavily. So what the hell is going on here? Now, the initial stories that came out were, you know, kind of the seedy regular stories you hear when a CEO is suddenly kicked out: I don't know, he slept with somebody, he abused someone, he did something inappropriate, something like that. Those were the initial rumors floating around, but as the day progressed Friday and then Saturday, it became clear that that wasn't it, and with the discussions about him returning, it was obvious that that wasn't it. So what was going on? And to really understand what was going on, you have to understand the structure of OpenAI and how it was founded and created. Wow, this show is much longer than I expected. I'm glad I don't have a hard stop at three o'clock. All right, anyway, let's adjust our goal as a consequence of the longer show. What the hell? So OpenAI was founded in late 2015 as a 501(c)(3). In other words, it was founded as a nonprofit, right, with a strong commitment to the, quote, public good.
They hoped at the time to raise a billion dollars for this nonprofit to work on AI, to be responsible in how it was developed, and to make sure that it benefited everybody. In their own words, they were committed to publishing their research and data in cases where they felt it was safe to do so and would benefit the public. Their whole founding document is full of public good, public good, public good, right? Anyway, they couldn't raise a billion dollars. They only raised 130 million for this nonprofit. So what they did, once it was clear that they couldn't do that, and bear with me, I know this is technical, but it's fascinating, is they created a for-profit subsidiary. The for-profit subsidiary could then sell shares to investors and raise capital, and it did. But the for-profit subsidiary was, from a governance perspective, completely controlled by the not-for-profit. So investors who invested in the for-profit entity had no control rights, no ability to control what happened with OpenAI. That was controlled completely by the board of directors of the non-profit. Now, the non-profit invested in the research and development, and the non-profit also gave money to other non-profits. It was like a charity. And the board was, by governance structure, dominated by, quote, outsiders. Originally, people like Elon Musk were on there. Now there is nobody of Elon Musk's stature on that board. There are people of high stature there, but all of them are committed to the not-for-profit, quote, charity mission of OpenAI, not so committed to, or worried about, the for-profit part of OpenAI, right? So on their website, they say the for-profit subsidiary is fully controlled by the OpenAI non-profit. They enacted this by having the non-profit wholly own and control a manager entity that has the power to control the subsidiary. Anyway, they structured it in that way.
Because the board is still the board of the non-profit, each director must perform their fiduciary duties in furtherance of its mission, which is safe artificial general intelligence. Put aside whether that's even possible. Safe AGI that is broadly beneficial. Broadly beneficial, the public good. While the for-profit subsidiary is permitted to make and distribute profits, it is subject to this mission. That is, if the board feels that the AGI is not safe or is not going to be broadly beneficial, they can change direction. The non-profit's principal beneficiary, this is perfect, right, the non-profit's principal beneficiary is humanity, not OpenAI investors. Third, the board remains majority independent. Independent directors do not hold any equity in OpenAI. Even OpenAI's CEO, Sam Altman, does not hold equity directly. This is stunning. Fourth, profits allocated to investors and employees, including Microsoft, are capped. All residual value created above and beyond the cap will be returned to the non-profit for the benefit of, drum roll, humanity. Fifth, the board determines when we've attained AGI, artificial general intelligence. Again, by AGI they mean highly autonomous systems that outperform humans at most economically valuable work. Such a system is excluded from the IP licenses, they won't license the IP because, right, because it's for humanity, and from the other commercial terms with Microsoft, which only apply to pre-AGI technology. So whatever rights Microsoft has are only to the pre-AGI technology, not to AGI. Now, before this reshuffle, the board was composed of Greg Brockman, the guy who resigned, who was chairman and president; Ilya Sutskever, the chief scientist, who played an important role in what happened; Sam Altman, who was the CEO; and the non-employees Adam D'Angelo, who I think is the CEO of Quora, Tasha McCauley, and Helen Toner, who are tech people from the Valley, right? Now, I'm going to speculate.
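The capped-profit arrangement described above is, at bottom, a simple arithmetic rule: investors keep returns up to some multiple of their investment, and everything above that flows back to the non-profit. Here is a minimal illustrative sketch of that split in Python. The function name, the numbers, and the cap multiple are all invented for illustration; they are not OpenAI's actual terms, which were never fully public.

```python
# Hypothetical sketch of a "capped-profit" return split. The cap multiple
# and figures are invented for illustration, not OpenAI's actual terms.

def split_returns(invested: float, total_return: float,
                  cap_multiple: float = 100.0) -> tuple[float, float]:
    """Split total_return between the investor (up to a cap) and the non-profit."""
    cap = invested * cap_multiple               # most the investor may ever receive
    to_investor = min(total_return, cap)        # investor keeps up to the cap
    to_nonprofit = total_return - to_investor   # residual goes to the non-profit
    return to_investor, to_nonprofit

# Below the cap, the investor keeps everything:
print(split_returns(10.0, 500.0))    # (500.0, 0.0)
# Above the cap, the residual flows to the non-profit ("humanity"):
print(split_returns(10.0, 1500.0))   # (1000.0, 500.0)
```

The point of the sketch is just that, past the cap, additional upside gives the investor nothing, which is part of why a pure profit-maximizing investor has so little leverage in this structure.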
I don't know this for a fact, but I'm going to speculate that this board is a board of what you would call effective altruism, EA. These people are effective altruists. This is about humanity. This is not about profits, God forbid. There's a for-profit entity? Yeah, well, we need to raise capital, what can we do? We can't do it otherwise. But this is a board of effective altruists. I'm pretty sure Ilya Sutskever and the non-employees, Adam, Tasha, and Helen, are all effective altruists, and I'm pretty sure that Sam Altman and Greg are as well, although maybe less so now than they were. This is a board that, it appears, came to believe that Sam Altman was steering the company too strongly in a for-profit direction and was pushing out products too fast, products that were, quote, dangerous, because of the dangers of AI that they all believe in; that he was pushing the company to make money, pushing the company to do what the investors wanted, pushing the company to abandon, not explicitly, but implicitly in its functioning, the non-profit goals. That Sam Altman was far too focused on the for-profit side, the for-profit goals, the investors, while the board was focused on humanity. And this is the essence of the split. This is a split within effective altruism, in the application of effective altruism here. You know, this was inevitable. It's inevitable that when a non-profit tries to own a for-profit, this will blow up in one way or another. And it has blown up. Most of the employees at OpenAI are now saying they want to leave. I think it's like 500 or 700 employees who have signed a letter to that effect. Many of them are going to be hired, I think, by Sam Altman and Greg Brockman at Microsoft. Basically, I wouldn't be surprised if OpenAI is emptied of talent, which all moves to become, in effect, a subsidiary of Microsoft.
Sadly, that'll mean that Microsoft and probably Sequoia and the other venture capitalists will lose the investment that they made, but they deserve it, because they invested in a governance structure that was never tenable. You don't invest hundreds of millions of dollars in a business where you have zero say, zero control, and where the business is not being run for you as investors to maximize long-term shareholder wealth, but for the benefit of humanity. So this is exactly the danger that altruism poses, effective altruism being just one species of the general trend. Altruism broadly, this is the kind of danger that altruism poses. It is anti-business, destructive from a business perspective. It is anti-productivity. It is anti-markets. It brings about, ultimately, decay and decline. And if vast numbers of businesses were run like this, the economy would collapse. So, wow, what a perfect story. You want to run a company for the benefit of humanity? Ultimately, you're going to fail. You want to run a company for the benefit of shareholders? Ultimately, you will benefit humanity. I can't think of a better illustration, well, there are millions of illustrations, so this is a great illustration, not necessarily the best, of the lack of viability of all these so-called nonprofit models, conscious capitalism, stakeholder capitalism, stakeholder whatever. You want to change the world? Make money. That's how you change the world. You change the world by selling products that people actually want, thus improving their lives. And, yes, do it morally. Apply morality to everything that you do in life, including the business you own, the business you run, the business you manage. Do it ethically, with ethics understood as the ethics of self-interest. But building a company on the basis of this kind of effective altruism, altruism generally, is always going to fail. That's my warning to the effective altruist movement out there.