From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante.

Artificial General Intelligence, or AGI, has people both intrigued and fearful. Last July, OpenAI, a leading research organization in the field, introduced the concept of superalignment via a team created to pursue the scientific and technical breakthroughs needed to guide and ultimately control AI systems that are much more capable than humans. It refers to this level of AI as superintelligence. And last week, this team unveiled the first results of an effort to supervise a more powerful AI with a less powerful model. While promising, the effort showed mixed results and raises several more questions about the future of AI and the ability of humans to actually control such advanced machine intelligence.

Hello and welcome to this week's theCUBE Research Insights, powered by ETR. In this Breaking Analysis, we share the results of OpenAI's superalignment research and what it means for the future of AI. We further probe ongoing questions about OpenAI's unconventional structure, which we continue to believe is misaligned with its conflicting objectives of both protecting humanity and making money. We'll also poke at a nuanced change in OpenAI's characterization of its relationship with Microsoft. And finally, we'll share some data from ETR that shows the magnitude of OpenAI's lead in the market, and we'll propose some possible solutions to the structural problems we think the industry faces.

Now, last week, OpenAI's superalignment team unveiled its first public research with little fanfare. The research describes a technique for supervising more powerful AI models with a less capable large language model. The paper is called "Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision." The basic premise is that superintelligent AI will be so vastly superior to humans that traditional supervision techniques, such as reinforcement learning from human feedback (RLHF), won't scale. This super AI, the thinking goes, will be so sophisticated that humans won't be able to comprehend its output. So instead, the team set out to test whether less capable GPT models can supervise more capable GPT-4 models, as a proxy for a supervision approach that could keep superintelligent systems from going rogue.

The superalignment team at OpenAI is led by Ilya Sutskever and Jan Leike. Ilya's name is highlighted in this graphic because much of the chatter on Twitter after the paper was released suggested that Ilya was not cited as a contributor to the research. Perhaps his name was left off initially, given the recent OpenAI board meltdown and his removal from the board, and then added later; or maybe the dozen or so commenters were mistaken, though that's unlikely. At any rate, he's clearly involved.

Now, the question the OpenAI team put forth is essentially: can a David AI control a superintelligent Goliath? This graphic has been circulated around the internet, so perhaps you've seen it; we've annotated it in red to add some additional color, no pun intended. The graphic shows that traditional machine learning involves RLHF, where the outputs of a query are presented to humans to rate them, thumbs up or thumbs down. That feedback is then pumped back into the training regimen to improve model results.
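To make that feedback loop concrete, here's a minimal, illustrative Python sketch of the RLHF idea, not OpenAI's implementation. The candidate responses, the toy rating heuristic and the running reward table are all our own placeholders; the point is simply the cycle of sample, rate, feed the rating back, reinforce.

```python
import random

# Toy "policy": maps a prompt to a handful of canned candidate responses.
CANDIDATES = {
    "summarize the report": [
        "Here is a careful three-point summary of the findings.",
        "tl;dr lol",
        "The report covers revenue, risk and roadmap.",
    ]
}

# Running reward signal learned from human feedback (response -> score).
reward = {}

def human_rates(response: str) -> int:
    """Stand-in for a human rater giving a thumbs up (+1) or thumbs down (-1).
    We fake the judgment with a crude heuristic so the sketch runs end to end."""
    return 1 if len(response.split()) > 3 else -1

def rlhf_round(prompt: str) -> None:
    """One loop: sample a candidate, collect feedback, fold it back into the signal."""
    response = random.choice(CANDIDATES[prompt])
    score = human_rates(response)                        # human preference
    reward[response] = reward.get(response, 0) + score   # feedback into "training"

if __name__ == "__main__":
    for _ in range(30):
        rlhf_round("summarize the report")
    # A real system would now tune the policy toward the highest-reward behavior.
    print("Responses reinforced toward:", max(reward, key=reward.get))
```

In a production system the ratings come from real human labelers, the reward table is replaced by a learned reward model, and the policy is updated with reinforcement learning rather than a lookup, but the shape of the loop is the same.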
Superalignment, as shown in the middle, would ostensibly involve a human trying, unsuccessfully, to supervise a far more intelligent AI, which presents a failure mode. For example, the super AI could generate millions of lines of code that mere humans wouldn't be able to comprehend. The problem, of course, is that superintelligence doesn't exist, so it can't really be tested. But as a proxy, in the third scenario shown here, a less capable model, GPT-2 in this case, was set up to supervise a more advanced GPT-4 model. For reference, we pin AGI at the human level of intelligence, recognizing that definitions vary depending on whom you ask.

Regardless, the team tested this concept to see whether the smarter AI would learn bad habits from the less capable AI and essentially become dumber, or whether the results would close the gap between the less capable AI's capability and a known ground truth set of labels representing correct answers. The methodology was very thoughtful. The team tested several scenarios across NLP (natural language processing) tasks, chess puzzles and reward modeling, which is a technique to score responses to a prompt as a reinforcement signal that then gets iterated toward a desired outcome. The results were mixed, however. The team measured the degree to which the performance of a GPT-4 model supervised by GPT-2 closed the gap on those known ground truth labels, and it found that the more capable model, supervised by the less capable AI, performed between 20% and 70% better than GPT-2 on the NLP tasks, but did less well in the other tests. The researchers are encouraged by the fact that GPT-4 outdid its supervisor and believe this shows promising potential, but the smarter model had greater capabilities that weren't unlocked by the teacher, calling into question the ability of a less capable AI to control a smarter model.

Now, thinking about this problem, one can't help but recall the scene from the movie Good Will Hunting. Please play the clip.

"Do you know how easy this is for me? Do you have any fucking idea how easy this is? This is a fucking joke. And I'm sorry you can't do this. I really am, because I wouldn't have to fucking sit here and watch you fumble around and fuck it up."

Now, in this episode we want to explore a corollary to AI safety, and that is the question of a possible supervision tax. There are several threads on social media, and specifically on Reddit, lamenting the frustration with GPT-4 getting dumber over time. A research paper by Stanford and UC Berkeley published this summer points out the drift in accuracy over time. Many theories have circulated as to why, ranging from architectural challenges to memory issues, with some of the most popular citing the need for so-called guardrails as having the effect of dumbing down GPT-4 over time. Customers of OpenAI's paid ChatGPT service, which is based on GPT-4, have been particularly vocal about paying for a service that is degrading in quality. However, many of these claims are anecdotal, and it's unclear to what extent the quality of GPT-4 is actually degrading; it's difficult to track such a fast-moving target. As well, we'd point out that there are many examples where GPT-4 is improving, such as in remembering prompts and producing fewer hallucinations. Regardless, the point is that this controversy further underscores the many alignment challenges: government versus private industry, for-profit versus non-profit objectives, and AI safety and regulation conflicting with innovation and progress.
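Before we turn to the market data, one more note on the research itself. To make the gap-closing measurement described above concrete, here's a hedged Python sketch of a "fraction of the gap recovered" style metric. The function name and all of the accuracy numbers are our own illustrative placeholders, not figures from the paper; the real experiments involve actually fine-tuning the strong model on the weak model's labels, which we only gesture at here.

```python
def fraction_of_gap_recovered(weak_acc: float,
                              weak_to_strong_acc: float,
                              strong_ceiling_acc: float) -> float:
    """How much of the gap between the weak supervisor and the strong model's
    ceiling is recovered when the strong model is trained only on weak labels.
    0.0 -> the strong model merely matches its weak teacher;
    1.0 -> it reaches its own ceiling despite the weak teacher."""
    return (weak_to_strong_acc - weak_acc) / (strong_ceiling_acc - weak_acc)

# Made-up numbers purely for illustration:
weak_acc = 0.60            # GPT-2-class supervisor, evaluated on ground truth
weak_to_strong_acc = 0.72  # GPT-4-class model trained only on the weak model's labels
strong_ceiling_acc = 0.85  # GPT-4-class model trained on the ground-truth labels

gap = fraction_of_gap_recovered(weak_acc, weak_to_strong_acc, strong_ceiling_acc)
print(f"Fraction of the gap recovered: {gap:.0%}")  # -> 48%
```

The question such a metric frames is exactly the one raised above: the student can beat its teacher, but how close it gets to its own ceiling tells you whether weak supervision is unlocking, or suppressing, the stronger model's capability.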
Right now the market is like the Wild West, with lots of hype and many diverging opinions. Now, something else OpenAI did recently, even more quietly, is change the language regarding Microsoft's ownership in OpenAI. In a post last month, you may recall, we covered OpenAI's governance failure, and we showed this graphic from OpenAI's website. As we discussed this past week with John Furrier on theCUBE Pod, the way in which OpenAI and Microsoft characterize their relationship has quietly changed.

To review briefly, the graphic shows the convoluted and, in our view, misaligned structure of OpenAI. It is controlled by a 501(c)(3) non-profit public charity with a mission to develop AI for the good of humanity. That non-profit's board controls an LLC, which provides oversight, and that LLC in turn controls a holding company owned by employees and investors such as Khosla Ventures, Sequoia and others. This holding company owns a majority of another LLC, which is a capped-profit company. Previously, OpenAI's website cited Microsoft as a minority owner. That language has now changed to reflect Microsoft's minority economic interest, which we believe is a 49% stake in the profits of the capped-profit LLC. Quite obviously, this change was precipitated by the UK and US governments looking into the relationship between Microsoft and OpenAI, a relationship fraught with misalignment, as we saw with the firing and rehiring of CEO Sam Altman and the subsequent board observer seat, the consolation prize, if you will, that OpenAI granted to Microsoft.

The partial answer, in our view, is to create two separate boards and governance structures: one to govern the non-profit and a separate board to manage the for-profit business of OpenAI. But that alone won't solve the superalignment problem, assuming superhuman intelligence is a given, which it is not necessarily.

Now, this all underscores another set of data we want to share, which shows that the AI market is very much bifurcated. To underscore the wide schisms in the AI market, let's take a look at this ETR data from the Emerging Technology Survey (ETS), which measures market sentiment and mind share amongst privately held companies. Here we've isolated the ML/AI sector, which comprises traditional AI plus LLM players, as cited in the annotations. We've also added the most recent market valuation data for each of the firms. The chart shows net sentiment on the vertical axis, which is a measure of intent to engage with the firm, and mind share on the horizontal axis, which measures awareness of the company.

The first point is that OpenAI's position is literally off the charts in both dimensions. Its lead with respect to these metrics is overwhelming, as is its $86 billion valuation. On paper it is more valuable than Snowflake, which is not shown here, and Databricks, with a reported $43 billion valuation. Both Snowflake and Databricks are extremely successful, established firms with thousands of customers. You also see Hugging Face very high up on the vertical axis, not nearly as high as OpenAI but significant. Think of them as a GitHub for AI model engineers; as of this summer its valuation was pegged at $5 billion. Anthropic is prominent on this chart, and with its investments from AWS and Google it touts a recent $20 billion valuation. Cohere, which this summer reported a $3 billion valuation, is on the chart as well.
Now, take a look at Jasper AI. That's a popular marketing platform that has seen downward pressure on its valuation because ChatGPT is disruptive to its value proposition, doing similar things in marketing at a much lower cost. You see DataRobot, which at the peak of the tech bubble in 2021 had a $6 billion valuation, but after some controversies around insiders selling shares its value has declined. You can also see H2O.ai and Snorkel with unicorn-like valuations, and Character.AI, a generative AI chatbot platform, recently reported a $5 billion valuation. So you can see the gap between OpenAI and the pack, and you can clearly see that emergent competitors to OpenAI are commanding higher valuations than some of the traditional ML players. Generally, our view is that AI broadly, and generative AI specifically, is a tide that will lift all boats, but some boats are going to ride the waves more successfully than others. And so far, despite the governance challenges, OpenAI and Microsoft have been in the best position.

So we have many questions on the topic of superintelligence. Here are just five, as this new parlance of superintelligence and superalignment emerges alongside AGI.

First, is this vision aspirational or is it truly technically feasible? Experts like John Roese, the CTO of Dell, have told us that all the pieces are there for AGI to become a reality; there's just not enough economically feasible compute today, and the quality of data is still lacking. But from a technological standpoint, he agrees with OpenAI that it's coming and that it's essentially a fait accompli.

Second, if that's the case, how will the objectives of superalignment, a.k.a. control, impact innovation?

Third, what are the implications of the industry leader, in this case OpenAI, having a governance structure that is controlled by a non-profit board? Can its objectives truly win out over the profit motives of an entire industry? We tend to doubt it. And the reinstatement of Sam Altman as CEO underscores who won this battle: Sam Altman was the big winner in all that drama, not Microsoft. So to us, the structure of OpenAI has to change. The company should, in our view, split in two, with separate boards for the non-profit and the commercial arm. And if the mission of OpenAI is truly to develop and direct artificial intelligence in ways that benefit humanity as a whole, then why not split the company in two and open up the governance structure of the non-profit to other players, including OpenAI competitors and governments?

Fourth, on the issue of superintelligence beyond AGI, what happens when AI becomes autodidactic, a true self-learning system? Can that really be controlled by less capable AI? The conclusion of OpenAI's researchers is that humans clearly won't be able to control it. But before you get too scared, there are skeptics who feel that we are still far away from AGI, let alone superintelligence.

Hence point number five: is this a case where Zeno's paradox applies? Zeno's paradox, which you may remember from high school math class, states that any moving object must reach the halfway point of a course before it reaches the end; and because there are an infinite number of such halfway points, the argument goes, the moving object never reaches the end in a finite time.
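As an aside from us, not from the episode, here is the standard formalization of that paradox. To cover a distance D you first cover half of it, then half of what remains, and so on:

\[
\frac{D}{2} + \frac{D}{4} + \frac{D}{8} + \cdots \;=\; \sum_{n=1}^{\infty} \frac{D}{2^{n}} \;=\; D
\]

There are infinitely many steps, yet they sum to the finite distance D, which is why Zeno's "never arrives" conclusion is a paradox rather than a theorem. The skeptics' question is whether progress toward superintelligence behaves like the endless halving steps, with the destination always one more breakthrough away, rather than like the finite sum.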
So is superintelligence a fantasy? This Gary Larson graphic sums up the opinions of the skeptics. It shows a super complicated equation with a step in the middle of the math that says, "then a miracle occurs." That's kind of where we are with AGI and superintelligence today, like waiting for Godot. Now, we don't often use the phrase "time will tell" in these segments; as analysts we like to be more precise and opinionated, with data to back those opinions. But in this case, we simply don't know.

Let's leave you with a thought experiment from Arun Subramanian, who put this forth at Supercloud 4 this past October. We asked him for his thoughts on AGI, and we think the same applies to superintelligence. His premise was: assume for a minute that AGI is here. Wouldn't the AI know that we as humans would be wary of it and try to control it? So wouldn't the smart AI act in such a way as to hide its true intentions from us humans? Ilya Sutskever has stated this is a concern. Please play the clip.

"Let me give you a thought experiment, okay? Let's assume that AGI will happen in the next two or three years, or AGI has already happened. Let's make it an experiment, okay? Now, if and when it happens, if it is truly AGI, don't you think it will know that the natural intelligence beings would get freaked out by the AGI?" "Indeed." "If that is the case, it would act in ways that will make us believe that..." "It would act dumb." "Dumb. It would hallucinate. If you follow the thought experiment, we either are already living in a world where AGI has been around, or it's so far out there that the only thing about right now is The Matrix, neuro-pumping." "Exactly." "That's why I posted this, that this is a personal opinion, but just follow the thought experiment, and we'll be in the circular logic of: if it will happen, it's probably already happened. If it ever happens, we'll be the last ones to know."

Okay, the point being: if super AI is so much smarter than we humans are, then it will be able to easily outsmart us and control us, rather than us controlling it. And that is the best case for creating structures that allow those concerned about AI safety to pursue a mission independent of a profit-driven agenda, because a profit-driven motive will almost always win over an agenda that sets out to simply do the right thing.

Okay, we're going to leave it there for today. Thanks to Alex Myerson and Ken Schiffman on production and our podcasts. Kristen Martin and Cheryl Knight help get the word out on social media and in our newsletter, and Rob Hof is our editor-in-chief over at SiliconANGLE. Thanks to all. Remember, all these episodes are available as podcasts; wherever you listen, just search "Breaking Analysis podcast." I publish each week on wikibon.com, which is being rebranded to thecubeinsights.com, and I also publish on siliconangle.com. If you want to get in touch, email me at david.vellante@siliconangle.com. The predictions are coming in; we'll review them for our predictions post with ETR in late January. So send them over, and if we use them, we'll cite you. You can always DM me @dvellante or comment on our LinkedIn posts, and please do check out etr.ai. They've got the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Research Insights, powered by ETR. Thanks for watching, and we'll see you next time on Breaking Analysis.