Thank you for watching SuperCloud 5. I'm Howie Xu, the guest host here, and a long-time AI and data executive in Silicon Valley. This is actually the third installment in which I talk to generative AI experts. In the first installment, I talked to AI leaders from Microsoft, Google, and Salesforce. In the second installment, just a few minutes ago, I talked to entrepreneurs, founders, and CEOs in the generative AI space. And in this third one, I'm talking to my distinguished panelist, Rob Toews, a partner at Radical Ventures. I'll let him introduce himself and brag a little about what he has done in generative AI. He just recently gave a talk at TED AI 2023, a very impressive one. Maybe you should introduce yourself a bit first.

Absolutely. Well, it's really great to be here, Howie. Thanks for having me, and I'm looking forward to the conversation. Just a brief background: as you mentioned, I'm a partner at Radical Ventures, and I lead the firm's Bay Area office. Radical is a totally AI-focused VC firm. We have close ties with many of the world's leading AI researchers, like Geoffrey Hinton and Fei-Fei Li, who is a partner at the fund. We're entirely early-stage focused; we primarily do seed and Series A investments. We occasionally incubate companies, where we bring the founders together, help them figure out what they want to build, and help them get launched. We did that with Cohere, the large language model company, a few years ago, and we've done a couple more incubations since then. It's been a very exciting time in the world of AI this year, obviously, so I'm looking forward to discussing it with you.

So Cohere is one of the large language model players in this space, right? Just a week ago, we saw some chaos in Silicon Valley, the OpenAI drama. That drama seems to be over, for now at least.
But what do we, as entrepreneurs and practitioners in this space, get to learn from it? Can you comment a little on that?

Yeah, it certainly was a dramatic past week or so for everyone. I would start by saying, as you said, that the way things ended was a lot less disruptive than it was looking like it might be. At the end of the day, Sam and Greg returned to OpenAI, and there will be a reconstituted board, but ultimately not as much changed as it looked like it might. Over the weekend, Sam and Greg were talking about leaving to do a new startup, and then preliminarily talking about joining Microsoft and having the whole OpenAI team join them. If one of those alternate realities had played out, it would have totally turned the AI landscape upside down. As things stand, things ended up settling in a fairly status quo place.

So you incubated an OpenAI competitor, if you will, but you seem relieved to see the outcome. Why?

I wouldn't say I'm relieved. And I do think there are a few pieces that have meaningfully changed, and at least some reflections worth talking through. One is that it is incredible, to Sam and OpenAI's credit, that when Sam was fired by the board and it looked like he wasn't going to be able to come back, nearly every single employee of OpenAI said they would rather quit and leave the company than stay without him. That level of loyalty is incredible and speaks volumes about the strength of the organization, and in some ways OpenAI will be stronger as a result of this. Having said that, OpenAI is the go-to provider of choice for a lot of companies that are just starting to ramp up their AI and LLM journeys.
And a lot of companies take the general approach of: let's start with OpenAI, build an MVP, and try it out. It's easy to use, it's straightforward. Then, as we get further into our journey with language models, we'll branch out beyond that, maybe get an open source model like Llama 2 and fine-tune it on our data, or start experimenting with multiple different large language model providers. That's a journey a lot of companies have already been going through this year. And even though this event ended up in a reasonably solid place, the instability has opened a lot of people's eyes: we don't want to be too deeply dependent on any one AI provider. And this has already started playing out.

So this is a bit of a wake-up call, right? You were solely dependent on OpenAI, but now you need to think about open models or other models.

Yep. And it's a reality that hundreds of OpenAI customers have gone to Anthropic, Cohere, and other large language model providers as they start being more thoughtful about the supply chain of AI for their company.

So, biased or unbiased, how far ahead is OpenAI of the rest of the large language model players, based on your understanding?

They're obviously an incredible company that's been pushing the boundaries of the field. But the technology moves so quickly that, honestly, and maybe this is a little controversial, I don't think OpenAI is doing anything fundamentally different from what other players in the space are doing, and not just Cohere, which is in our portfolio, but also Reka, another company in our portfolio that's building large language models.
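The multi-provider strategy described here can be sketched as a thin abstraction layer. A minimal sketch follows; the provider names, the `complete` interface, and the fallback behavior are all illustrative assumptions, not any particular vendor's API.

```python
from dataclasses import dataclass
from typing import Callable

# A minimal sketch of "don't depend on one AI provider": every vendor
# sits behind the same complete() interface, so swapping providers is
# a configuration change rather than a rewrite. The providers below
# are toy stand-ins; a real adapter would wrap a vendor SDK call.

@dataclass
class Provider:
    name: str
    complete: Callable[[str], str]  # prompt -> completion text

def make_router(providers: list[Provider]) -> Callable[[str], str]:
    """Return a completion function that falls back to the next
    provider if the current one raises (outage, rate limit, etc.)."""
    def complete(prompt: str) -> str:
        last_error: Exception | None = None
        for provider in providers:
            try:
                return provider.complete(prompt)
            except Exception as exc:
                last_error = exc  # record the failure, try the next one
        raise RuntimeError(f"all providers failed: {last_error}")
    return complete

def flaky(prompt: str) -> str:
    raise TimeoutError("primary provider is down")

def stable(prompt: str) -> str:
    return f"echo: {prompt}"

route = make_router([Provider("primary", flaky), Provider("backup", stable)])
print(route("hello"))  # falls back to the backup provider: "echo: hello"
```

The point of the sketch is that the calling code never names a vendor; the list of providers becomes the "supply chain" decision.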
There's a handful of other competitors. They're all using the same fundamental architectural approach, and at least a small handful of them are staffed by top researchers who have come out of organizations like Google, Facebook, and in some cases OpenAI and DeepMind. OpenAI has the biggest team, is the most well-funded, and has done amazing work, but I don't think there is an insurmountable gap, and six months from now a lot of companies may end up being closer to them than they are now. I also think the current paradigm that OpenAI and other companies are executing against is not going to be the final paradigm for AI. There will be new fundamental breakthroughs on the research side, and that could shake up the landscape and the ecosystem in terms of who the leaders are.

So right after the drama closed last Tuesday, news about Q* surfaced. What's your take on Q*? Is it real? Is it something people should pay attention to, or just more noise out there?

I think it's a little overhyped, honestly. OpenAI is working on a lot of next-generation research, and a lot of it is oriented around how to get models to reason more robustly, to understand causality, to have common sense the way humans do, and I'm sure what they're doing with Q* is interesting work. But it's worth noting that literally nothing specific about what Q* is came out, other than the fact that it's a new effort underway at OpenAI, and just based on the name there has been so much speculation on Twitter and elsewhere about what they're working on.

Q-learning?

Yeah, the combination of Q-learning and A*.
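For readers unfamiliar with the first of the two names being combined in the Q* speculation: Q-learning maintains a table of action values that gets nudged toward observed rewards. A minimal sketch of one tabular Q-learning update follows; it illustrates only where the "Q" in the name comes from, and says nothing about whatever OpenAI actually built.

```python
# One step of tabular Q-learning: move the value of (state, action)
# toward the observed reward plus the discounted best value of the
# next state. Purely illustrative of the term "Q-learning".

def q_update(Q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    best_next = max(Q[next_state].values(), default=0.0)
    target = reward + gamma * best_next          # bootstrapped target
    Q[state][action] += alpha * (target - Q[state][action])
    return Q[state][action]

# Toy two-state world: acting from "s0" yields reward 1 and lands in "s1",
# whose best known action value is 2.0.
Q = {"s0": {"go": 0.0}, "s1": {"go": 2.0}}
new_value = q_update(Q, "s0", "go", reward=1.0, next_state="s1")
print(new_value)  # 0.5 * (1.0 + 0.9 * 2.0 - 0.0) = 1.4
```

The A* half of the name, by contrast, refers to the classic best-first graph search algorithm; the online guessing was that the project somehow marries value learning with search.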
It's tempting to have this simplified narrative of "oh, they figured it all out, they have AGI, and it's called Q*," and that narrative sells. But in reality I think it's one incremental piece of a puzzle that a lot of groups are working on. We don't know the details of the program, but my expectation is that it's an important piece of research, and there will be many such important pieces of research.

Okay, now let's come back to the entrepreneur landscape: people who are building companies around OpenAI and around large language models. What do you see? There are lots of them, and you're talking to so many of them every day. What traits are you looking for that say, "this is actually worth my investment"? Set aside the large language model crew, which will be half a dozen or a dozen companies; what about the rest, the hundreds or thousands of companies you're seeing?

Yep. And do you mean traits in terms of the founders, or in terms of the businesses?

Let's talk about both.

Okay, very good. On the founder side, it really depends on where in the AI stack they're building. For companies building foundation models, whether that's language models or other data modalities, cutting-edge research chops really are essential. And it's funny: the entire world is going crazy about AI these days, but there is still a relatively small pool of individuals who really know how to build cutting-edge models, probably less than a thousand. So you want to be backing teams that have those capabilities and that expertise, and there aren't that many of those people out there.

When you say that number, you mean startups, or...?

The number of individuals out there who have the ability to build cutting-edge models.
At the application layer, the skill set looks very different. For companies building applications and products on top of other models, whether an open source model, OpenAI's models, or someone else's, much more important than differentiated AI expertise is domain expertise and subject matter knowledge. Just to give an example, and this is very relevant for you and Palo Alto Networks, there's an interesting crop of companies looking to build next-generation cybersecurity companies powered by language models. The ideal founder for that type of company is not necessarily a hardcore DeepMind-style AI researcher, so much as someone who is a deep cybersecurity expert and really understands the product needs, the customer needs, and what the go-to-market looks like.

And marry that domain knowledge with the large language model.

Yeah, exactly.

How big a deal is understanding the large language model in that case?

The technology is still a big deal. It's unlocking possibilities and product offerings that couldn't have existed two years ago. But so much of it at this point you can abstract away as a founding team. As an application-layer founding team, you need competence with large language models, but you don't need to be building your own. In a lot of cases, a very good software engineer can learn how to use APIs and RAG and so forth without a lot of previous experience. So the amount of differentiated AI expertise you need really depends on the kind of AI company you're building.

So people who can do the DeepMind- or OpenAI-style model work are not crucial in this case?

Yep.
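The "APIs and RAG" pattern mentioned here is conceptually simple. Below is a minimal retrieval-augmented generation sketch; a toy word-overlap retriever stands in for a real embedding-based vector store, and the prompt is printed where a real application would send it to an LLM API. The documents and function names are illustrative assumptions.

```python
# A toy retrieval-augmented generation (RAG) loop. Real systems use
# embeddings and a vector database; a simple word-overlap score stands
# in for retrieval here so the sketch is self-contained and runnable.

DOCS = [
    "Cohere and Reka build large language models.",
    "Radical Ventures is an AI-focused venture capital firm.",
    "RAG grounds model answers in retrieved documents.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by how many lowercase words they share with the query."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Stuff the retrieved context into the prompt sent to the model."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What does RAG do?", DOCS)
print(prompt)  # in a real application, this prompt goes to an LLM API
```

The design point is the one made in the conversation: nothing here requires research-level AI expertise; the hard part is knowing which documents matter for the domain.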
For companies building an LLM-powered cybersecurity company, or an LLM-powered legal company, or a copilot for doctors, where you're not developing your own models, it's really about the product you're building; the underlying AI you can more or less take off the shelf and plug and play.

So that's the skill set for the entrepreneurs. What about the companies themselves, what they do? Because everyone tells me there are 50 companies doing X and 50 companies doing Y. What do you look at to say, "this is what I'm interested in"?

Yeah, totally. Again, we can set aside the core model builders, which I think is a different category. One really important element we look for, and it sounds simple, is that what the company is doing should be really hard to do. It shouldn't be easy, it shouldn't be straightforward, and it shouldn't be something that a few hackers at a hackathon can put together an approximate prototype of in a couple of days. A lot of the time that entails looking in areas that are less crowded, because you're right: there is an incredible rush of capital flowing into AI startups, AI entrepreneurs are popping up left and right, and some categories are very saturated. So it's important to think about areas that are less targeted and have less noise, in large part because they are unintuitive and not easy to build, but could represent a dramatic paradigm shift.

And on "not easy to do": are you looking at the engineering side, or at some deep understanding of, say, data science? What are you looking for?

It can be either. The cybersecurity example I gave is a good one.
You sometimes hear the term used in a derogatory manner: "wrappers around OpenAI" or "wrappers around an LLM." There are some wrapper companies that really are thin, like companies that help you create marketing copy. There just isn't a lot of defensibility there, because it's easy to build. An LLM-powered cybersecurity company can also technically be thought of as a wrapper, but it's so much thicker a wrapper, to use that parlance. It's so hard to build a cybersecurity product that will get adoption that I can see some breakout winners emerging, because it's not something a lot of folks will be able to replicate.

Another category on this topic of hard-to-do, another area we've been spending more time on at Radical, is applications of AI in core science. Probably the most well-trodden subfield is applying AI to drug discovery: creating new types of molecules that haven't existed before, then shepherding them through clinical trials and getting them to market. The rise of large language models and generative AI over the past couple of years will be a game changer for drug discovery. But that's just the tip of the iceberg. There are a lot of other fascinating fundamental scientific endeavors that can be turbocharged by AI and turned into really exciting commercial opportunities. AI for materials science is one example; generative AI for battery chemistry is another. These are all companies that are very hard to build and require deep subject matter expertise in that particular scientific domain as well as in AI. As a result, they're less crowded, because there aren't as many people going hard after those fields.
So you agree with me that there are areas that are super crowded, but you're also saying there are areas that are not crowded, like drugs or batteries, the more fundamental research areas, where you feel there should be more generative AI experts.

Yeah, definitely. I do think those fields will begin to attract more and more attention, because the markets are massive and the opportunity for value creation is massive. So I expect them to become more and more prominent and to attract a lot more entrepreneurship.

So there are areas that are super crowded and areas that are not. When I talk to my investor friends and entrepreneur friends, everyone is gung-ho about the generative AI gold rush. What do you think? Is it a bubble, or not quite a bubble yet? How do you see it?

I love this question. I do think we're in a bubble. There is a lot of irrational exuberance in the world of AI right now. There are deals getting done at valuations that I think don't make sense, and deals getting done that, frankly, will look silly in a couple of years. But what I would say is this: the basic insight that the core technology is incredibly powerful and transformative, as important a breakthrough as the internet or electricity, I think that is correct. And I'd say it's actually inevitable: whenever there is a massive technology breakthrough that goes mainstream, it's impossible for there not to be some sort of financial bubble around it, where everyone gets excited and enthusiasm runs a little ahead of what the technology is capable of. If you look at prior examples, the internet is a great recent one.
In the dot-com boom and bust there was obviously a lot of silly behavior, a lot of deals that didn't make sense; people forgot about business fundamentals, and a lot of money was lost. But at the same time, the core enthusiasm for the internet and its potential wasn't misplaced, and in the subsequent 20 to 25 years we've obviously seen that. So I think both things are true: we are in a bit of an AI bubble, and there's going to be some deflation of it. But at the same time, 5, 10, 20 years from now, it will be the case that this is the most important technology of our generation.

Interesting. I've been in this business for quite a few years, and every time people ask me, I give the same answer: AI, and now of course generative AI, but even before that, AI is both over-hyped and under-hyped at the same time, all the time. It sounds like you see the same thing: the potential is huge, but people are still making silly mistakes. Great.

So one last question, about the enterprise, because I'm an enterprise guy. There will be consumer-ish applications, but I care more about the enterprise. On the enterprise side, we saw Microsoft 365 Copilot go GA on November 1st. That was probably the first large-scale copilot out there, and then we didn't see too many more in 2023. One year after ChatGPT, why do you think we're not quite there yet, and what's your prediction for the timeline?

Yeah, totally. It's a reality, as you said, that enterprise adoption doesn't happen overnight. It moves slowly. And this is one reason why expectations and enthusiasm have run a little ahead of reality from a business perspective. By this point, basically every large company, whether it's a tech company or not, has woken up to the fact that generative AI is going to be incredibly transformational and that they need to be thinking about AI.
They need to have an AI strategy. Every board discussion is around this, and every CEO is feeling pressure from the board to do something in AI. So basically every enterprise is thinking about it, tinkering with it, doing POCs, playing around with models and so forth. But there's still a gap between that and actually figuring out how the technology integrates into a company's operations and products at scale: how you operationalize it and systematize it, both on the product side, how it fits into existing product offerings, and on what you could call the mechanics of it, in terms of data security, data privacy, how to do fine-tuning, and how to customize models so they work the way you want them to. That process is underway, and again, it's not going to happen overnight. A year from now, there will be a lot more progress on enterprise adoption, but it will be two, three, or more years until AI is really deployed at scale across the enterprise.

Okay, so two or three years to scale, that's your prediction.

If not more, yeah.

And today it's more of a tinkering stage, which is very consistent with what the entrepreneurs on the previous panel were saying: people are tinkering, we're doing integration, it will take time. Last 30 seconds. We've all been in this generative AI exuberance stage for the last year or so. What's an aha moment you had in the last year, where you thought, "I used to see things this way, but now I've changed my mind; I think about this very differently"? Other than the things you and I already discussed, that enterprise adoption will always take time and technology will always be both overhyped and underhyped, is there any one moment in the last 12 months you can describe?

That's an interesting question. I would go back to our earlier conversation about applications of AI in science.
One big aha moment for me, to give a concrete example, is the ability today for large language models to basically learn the language of proteins: you train a large language model not on English or another natural language, but on sequences of amino acids. The model can essentially learn the language of proteins, the grammar and semantics of protein sequences, and then create new proteins that have never existed in the world before, have never existed in any organism on Earth, but that are tailor-made to have specific structures and specific functions, and that you can therefore craft to be helpful therapeutics for human health and medicine. That idea of using AI for drug discovery not just to search the existing space of molecules, but to actually create totally new proteins or other biological molecules that have never existed before yet serve a specific purpose, really opened my eyes to the fact that science in general, and biology and therapeutics in particular, is in the very earliest innings of being completely transformed by AI.

Great, thank you very much, Rob. The potential of generative AI is still beyond imagination, that's what you see. Thank you for watching SuperCloud 5. I've now talked to AI leaders at big companies, to AI entrepreneurs, and today, in this panel with Rob, heard the investor view. Thank you for watching.