If we all get to go in and have deity-level experiences, somebody determines how you go in and somebody manages the substrates that you enter, and those are the powerful people. End of story. Boom. What's up, everyone, welcome to Simulation. I'm your host, Allen Saakyan, and we are at the Transformative Technology Conference. So many smart people, so grateful to be here. Dan Faggella is with us now, the founder of Tech Emergence. I'm super excited to talk to you, man. I'm pumped up. Glad to be here, Allen. Yeah, of course. I really appreciate it. Before we talk about the role of AI entering business and government and how to best have that happen, before we get there, who are you? How did you become who you are today? Yeah, you mean how did I get interested in kind of the grand implications of AI in some way? Yeah. All right, it was a little bit of a funky path. So I began with an interest in well-being. It dawned on me at some point in my late teens that things that are self-aware have a weight on a moral scale. And if that's the case, then you would want to understand what constitutes well-being, what's positive quality versus negative quality of experience. I'm not a hardcore utilitarian, but there was a strength to that. I was like, oh, that would make sense. If you could improve that net tonnage of happiness, that would be worthwhile. And so, undergrad and then graduate school. I went to grad school at UPenn under Martin Seligman for positive psychology, understanding the constituents of fulfillment. The focus, Allen, was on skill acquisition. So I'm an MMA guy. You know, my ears are all messed up. Oh, damn. Yeah. So the way I paid for Ivy League graduate school was training fighters and doing a lot of competitions and a lot of seminars. Whoa. Yeah. And so I got the black belt, did the national tournaments and won some shiny stuff, and did a lot of that jujitsu stuff to pay the bills.
You know, undergrad and graduate school cost money, and my friends were making Subway sandwiches and selling insurance. And I was like, man, I'd rather teach people to choke each other. Wouldn't that be more fun? So I went in with a concentration, not just on the study of well-being, but on what makes us learn faster. What are the psychological models, and what are the neurological happenings, that constitute performance in memory, performance in sport, for me it was combat sport, performance on musical instruments, learning the violin or performing the violin, whatever the case may be. And while I was doing my master's thesis, there were rustles in the breeze, this was 2011, like, hey, man, you're doing the neuroscience of learning. The computer science guys are doing learning too, and it's supposed to be kind of the same thing. So people told me there's this rough analogy with how they're doing it in computers now. And by the time I got out of graduate school, the ImageNet stuff with machine learning was just taking off. We were seeing human-level performance on these different tasks. And I got out of grad school like, man, I might have studied the wrong stuff. So it dawned on me that it's likely that through cognitive enhancement and through artificial intelligence, this net tonnage of well-being, rather than teaching people and understanding the well-being improvement of hominids, the expansion of sentience itself through non-biological means will probably be the great moral crescendo, the most important thing. So if right now there's, let's say, two cups' worth of total qualia on Earth, if we could create substrates, like, imagine what crickets have for total qualia, total creative experience, sensory experience, imagination, and then imagine people, we see, I don't know, whatever, a million-fold increase.
If there's another one of those, because you've only got a 3% genetic difference, or five or seven or whatever it is, between chimps and humans, right? But you don't have spaceships with those chimps, you know what I'm saying? So if you take that up one more step, not even cricket to human to above, you just go chimp to human to above, that one step, whatever that is, whether we enhance our way to it or whether we create it raw, wouldn't that be the most important moral thing? Whatever is post-human, whatever populates the galaxy with raw qualia, that shebang is maybe the only singular event humanity will be a part of that will have a moral ricochet for the universe. This dawned on me at 20-something, right out of grad school, and that got me into AI. Yeah, I like it. You called it a moral crescendo. It's like, yeah, chimp, human, what's next? How do we get to what's next? And why is it so important to have the conversations with companies and governments about how AI gets in? Yeah, and I'll get into that as well if you'd like. But yeah, that's part of the path there. And then, as you started getting interested in it, what ended up getting you to Tech Emergence? Yeah, well, I was 24. So I started exploring this stuff at the end of grad school, at 22, 23. And I was like, what do I do with this? My whole plan was well-being in humans, understanding that and proliferating that. Now that that may just not even hit the Richter scale in the big picture here, in terms of the ultimate trajectory of intelligence and sentience itself, I mulled it over for a while. At 24, I decided, OK, well, if that was the big moral thing, then what do you want to do to make that event non-catastrophic? What are you going to do? I guess I could write books. I could do more TED Talk stuff, and cool. But how do you really want to contribute?
And I think what seemed most evident to me was: what if there was a way to encourage and better inform an open-minded, interdisciplinary conversation around what we are turning ourselves into? What happens after people? Because I think it's a non-provincial concern. In other words, kind of like global warming, you can't have just one group of humans thinking about it. I think as humanity, we may have to say, hey guys, where are we going to take ourselves? Are we going to play the upload game? Are we going to allow ourselves to bow out when whatever is beyond us emerges? Do we want some kind of a zoo scenario? There's a million ways that could go. But if facilitating a well-informed conversation was the ball game, well, wouldn't it make sense to be a hub, to have the deepest insights on the implications and applications of AI now? So, the hard capabilities of AI: to grasp the entire capability space in health care, in finance, in defense, and to be able to make an open and frank layout of what's possible, so that folks like the World Bank, who we do research projects for, just finished one up, or the UN, who'll call on us to give talks, can talk governance. So we do the business with the business, but at the same time, I think the grand governance conversations have to involve: what can it do? And so, hey, maybe that conversation would be worth fostering. So applications and implications. Yeah. I like that. And I like how you gave all the different disciplines right away as well: health care, finance, defense. And there's so many more past that as well. Yeah. So then they're calling on you on the government side, on the business side. Are you creating like a knowledge graph, then? Yeah, so there's a whole bunch going on at present. There's a map of best practices around the adoption of, and garnering a return on, artificial intelligence.
I think for this technology to proliferate, people are going to have to understand what the best practice is for them. So we're actually in the middle of a research project there. And we're going to be doing, in banking and pharma, kind of a total capability map in the way that you're saying. That'll be updated every year to look at the level of traction of the different technologies, and the integration and cost requirements to use those technologies. So ranking and scoring things in terms of their capabilities, and eventually doing the same thing in defense. Right now, a lot of that is in building an ontology. We already have the ontology, but we're mapping out the various companies that offer solutions in that space. But really, the deeper research is talking to folks like Ben Goertzel, who's been part of our previous polls, and other folks like him. Back five, six years ago when I first got into this, Ben was maybe one of seven people that cared about post-human intelligence. But we also talk to the people in industry. So yeah, the ontology and the startups is what we do now. Really tailored custom research for big organizations that need to make expensive decisions is a big bulk of what we do now. But mapping that capability space over time is, I think, the purpose we could pursue that might foster, which is what we plan to do, and inform that global conversation about where we are steering it. And right now, maybe it's developing drugs and saving lives, and cool. But in the future, I think it's going to be: what kind of brain chips do we allow in? And if they're doing it in Asia, are we OK to do it here? And is there a way to ensure that there's a non-arms-race dynamic going on? So the transparency around the tech is also to kind of let people put themselves on notice as to what they think is morally consequential and what we should maybe think about regulating.
I don't think we're nearly at the big game there, but people actually care about the little game. And so we're part of that convo until it gets to the next level. It's funny. That's a really good way to put it, that we're totally in the little game right now. Yeah, it's the little game. I mean, this is kiddie-pool stuff. To be honest, I almost have to pretend to be interested. I mean, I'll do it. I do a good job at what I do. But at the end of the day, this is important because it leads to the crescendo. That's why it's important: because influencing the trajectory would be important. And right now, it's just getting on the radar, which is still important, but it's baby steps. And those baby steps, like you were describing earlier, they help with people's health, or there's just meaningful impacts. There's real-deal stuff. So as we get to the small-game-versus-big-game stuff, what you just said there was really cool. You said that there's a ranking system that you do, and then like a linking system of nodes. Like you connect. Yeah, yeah, yeah. I like that. Can you explain that? Sure. So in terms of the research that we deliver, regardless of industry, there are different facets and factors to consider about a given capability of AI. So if we talk about capability, think about a recent project: banking and customer service. Retail banks have a lot of employees. They have a lot of revenue. And they have a lot of customers. Being able to improve customer service and reduce those costs is a big deal for them. And so if we look at the capability of handling customer inquiries, we're not just talking about email. We look at the whole bubble, broadly. There will be different technologies at different levels of maturity. So maybe there's software that will purport to answer questions.
But really, the bulk of applications are more about properly routing incoming tickets to the right department and sorting, not necessarily being capable of responding all that much. Doing the deep diving to know that that's the case is something a bank would want to know before they spend money with a vendor. Because everybody's going to brag that they can get it all solved. But our job is to look within the different capability spheres and say: how much traction do these applications have, in terms of ROI that they've actually garnered with real companies? What does it take to integrate them? What kinds of data infrastructure do we need, and what's that going to cost to set up? What kind of talent is going to be required to run them? And then similarly, how well are they getting picked up in industry in general? Sometimes there will be a technology that might not have that much firm ROI, but almost all the top players are piloting it, which means there may be more credence to that application and, between fundraising and future traction, more promise for it in the near term. So being able to weigh those things is important before a bank goes and pulls the trigger on, let's hire these people, build out this department, and shoot for this goal to be possible in two years. It's like, well, do you want to see what the companies can do first? And most of them say, yeah, I guess we would. And so that's the business. I like that a lot. Now, who is your team that is able to parse how much ROI an AI company has actually brought back? And what the data infrastructure requirements are for these different applications? So my technical advisor is a postdoc MIT guy by the name of Marco Laghi, who's taught me as much feeble Python as I'm able to do, but got me the fundamentals and got me through Andrew Ng's course. But we have folks like him and a cadre of other people who are on the academic PhD side.
And then my team is analysts. These are people whose job it is to assess case studies and to grill companies about those case studies, to get to the facts and to determine what defensiveness from a vendor implies if we're not getting certain kinds of facts, not able to find them, not able to get the proper response during an interview. So the core squad is folks whose whole job is to think through the lens of an executive in ROI, or the World Bank or the UN thinking about an end result. Not thinking about the code, but thinking about an end result. And then the PhD folks are the ones who would say, OK, here's what we found, and, from a technical perspective, I think we should interpret it this way, this way, or this way. So we've got the folks that can bring in the hard math and the science, but the core team is people who work through our frameworks to assess these things purely from a business perspective. So both parts are important. In a research product, a very expensive research product, both parts are always in play, but the core team is analysts. And the language we speak to the C-suite is not Python. The language we speak is money. Yeah, yeah. Interesting. I'm imagining what it's like as the analysts are trying to find the data so you can actually deliver. I imagine this process is difficult, like you were saying. Sometimes it's not transparent, and it's difficult to find. But people will pay a lot of money for this. Governments want to know who they're going into bed with. So do companies. They want to know who they're getting married to, et cetera. And what better way. Who has reasonable goals, too? They want to say, oh, we want to achieve this. It's like, well, if it takes four times longer than you would suspect, or it takes two times longer and eight times more money, maybe you should know that. So yeah, it's that, too. Yeah, reasonable goals. What capabilities will be accessible within the next two years?
You might have ideas in the C-suite that turn out to be really bad ideas when you assess the lay of the land and the maturity of the vendors in the space. Interesting. Now, out of the conversations that you've been having with governments and corporations, can you give us some that you've found most fascinating so far, whatever you can tell us? Yeah, yeah, sure. In terms of questions from governments, do you mean? Or the most fascinating conversations you've had, or examples of your analysis of a prominent AI company being a really good fit for a company, or a really bad fit in some way, just to teach us what the stories are. A lot of the conversations I often have the most fun with are with companies that have raised a lot of money, like $50 or $100 million, and are concerned with really growing and dominating a given space, whatever it is, drug development, or retail security or something. Those conversations are often fun because these are people that, more than understanding the business and the tech, understand the business models around AI. And, I'm not going to say this is good or bad, but when we talk to VCs, we see a lot of the same thing. There is an unspoken understanding that whoever owns the data can just win, can just win altogether, and that that's the promise of AI. You talk to enough VCs, no one's ever going to say, we'd sure like to build a monopoly, because that's super uncouth. In Silicon Valley, politically, that's not cool; you'd get ostracized. So no one's going to say it, but it's known that there will be no general search engine outside of Google unless they get destroyed, because they have the best product, they get all the users, they get all the data, their product continues to be better, there's no reason to use anything other than Google.
The same would be the case in lead scoring for selling cars, or in data security in a certain sector or industry, or in physical security applications for offices and business premises, or things along those lines: someone will tenaciously garner enough of a base of users to inform their product, have their AI be so much more capable than the competitors', that there's no reason to buy from anyone else. And so a lot of the conversations around business models, sometimes we're told overtly, sometimes they're beating around the bush with the smaller questions, but they're cycling around dominance. Data dominance, which would ultimately be at least the capability to play the monopoly game. So companies and VCs, I think, are really adamant about this. And seeing the business model moats and approaches to how people are going to do that, what it looks like to acquire all the data, what it looks like to garner that data plume that lets us completely win, those are curious convos. So I don't know if they're good or bad, but it's interesting to see how many people are thinking about them. It's more than I think most people assume. Damn, and that leads right into the big-picture stuff, which is: we care so much about data dominance, and geopolitically, like you were saying earlier, there's so much pressure on being the first movers, the ones that have the most data, the most tech, the best-quality products. And what that does is it makes all of the ethical considerations and the moral considerations take a back seat. And so that is obviously becoming a growing concern. There are no longitudinal tests on so many things that we're moving forward with. So now, and you're speaking the language right there in these conversations you're having, again, it's how do we update and dominate. So where do you see, how do we bring the ethics forward?
Great question. Right now, I think most ethics in AI is super-duper virtue-signaling stuff. I hope to be able to write about this soon, because I'm revolted by a huge majority of it. But I do think it's exciting to see there's some traction there, and there are people that legitimately care. There are a lot of companies just whipping out a white paper about it, just like when green became a thing and everyone was like, I, too, am green. We're there with it now, but I think it will lead to something more meaningful. Here's what I think it's ultimately about. I think that among the highest powers, in terms of companies and in terms of nations, I foresee, and this could be totally wrong, by the way. If we shake hands in 15 years, you might be like, man, remember that dumb thing you said? But I actually suspect it's right: that substrate monopoly is sort of the endgame of the highest-level businesses, the Googles, the Baidus of the world, and the big countries. I mean, primarily the US and China; maybe we could argue the EU, other people can jump in on that. I'm not as optimistic about those folks in terms of the big game. But big nations, big companies: who owns the substrate that houses the most powerful AI and that houses human experience? As we begin to live more and more in virtual worlds. I'm talking like this, right? Not like this, but like this. When we work, when we have teachers that are virtual, when all of our entertainment exists in this other place, a place that is better than the world. It's just better. It's more customized. It's to our needs. It's to our preferences. It helps us work. It helps us play. It helps us relate. It helps us grow in our character. When we exist in the virtual world, whoever owns the substrate that houses those experiences, and whoever owns the substrate that houses the powerful AI, wins. So that's the game. That's the pinnacle of the dominance game on Earth among the great powers.
Now, the small powers will mostly try to chip away. Just like when the UN was formed, back in '46 or whatever, what were the little countries bickering about? Mostly making sure the big countries didn't dominate everything. So they're mostly playing a game to try to get the equity and the power away from the big guys, to play the I'm-a-little-guy-and-I'm-good card and get the power away from the big folks. But among the big folks, the Googles, the Baidus, the US, China, whoever owns that substrate will be as close to deity level as you can get, and will certainly be in the best position to wield the might of that trajectory we talked about, wherever they would prefer. And so I think that's a dynamic to, in my opinion, be somewhat wary of and probably somewhat skeptical of, but at least aware of. I would consider it exceedingly naive to think that that isn't the case. Could be wrong. But you asked, where does this data dominance thing go? It goes to the highest level of dominance, my good man. It goes to substrate monopoly, which is about as close to deity as an organization could become. At least on this little rock that we sit on. I agree with you. I feel very similarly: the Googles, the Baidus, the US, China, that's the type of stuff on the scale of companies or of governments. The ones with the most data, the ones with the most powerful AIs, the ones that control everything that we see, do, feel, engage with. It's funny, I was just on my fourth 10-day meditation retreat, and when I came back I had this graph that I had made. It showed that the more indistinguishable virtual realities get, the more time we'll spend in virtual realities. For sure. Because we just get to design the world that we live in. It's gonna be. The world of atoms is just garbage. I mean, the atoms that matter are the atoms that are in the substrate.
So when people can live in blissful, purposive, creative, expansive modes outside of this nutty world of atoms, whether strapped in in this regard or plugged in in that regard, however it is, they're going in. Yeah, like you said, when it's indistinguishable or just better, then we're going in. And so the atoms that matter, I guess, are us, for now, until we upload. Or if that's even possible; I'm not 100% there. But really, the atoms that matter are the atoms that house what you were just talking about. Those are the atoms that matter. The bits world is gonna be where we hang. Yeah, the bits world. Where power is. So then, what would be the ideal ethical and moral situation with that transition and that dynamic? Yeah. This is a to-be-continued, I think. But I will say, I think there is an import to simply saying this might be the case. This might be the highest field of power, the highest sort of domain in which power is exerted and determined, and this is one that maybe we should look at, and we should see who's positioning how within that lens. At least that awareness, I think, means something. And transparency on who's doing what, which is what we do, not from a policing standpoint but just from a market research standpoint, I think is also somewhat important. So, understanding that that's a dynamic, and hopefully having citizenries consciously be a part of non-arms-race dynamics. I love that, when you said that earlier. I forgot to highlight it: non-arms-race dynamics. Because everything from biotech to neurotech to even AI is now an arms-race dynamic. And if we can do the non-arms-race dynamic, that'd be fantastic. Fingers crossed. Maybe we'll end on this. I'm optimistic that there's probably some way around overtly Armageddon-ish circumstances. Otherwise, I just wouldn't even try.
But there is a notion that, in nature, things eat things, and the things that live, live because they collaborate when it makes sense and then compete as soon as it doesn't, and the things that persist are the winners. Take the last words of Alexander, who spent a lot of time conquering but not that much time succession planning, and died pretty early. On his deathbed, there are a lot of purported last words of Alexander, and who the hell knows, but Plutarch attributes it thusly: they asked him, king, to whom will we give the kingdom? And purportedly, and there are different interpretations, he said: to the strongest. And then died. So I wonder, is it always the strongest? And is there a way for humans to be aware that that is our nature, that that viciousness really is, I think, a part of biology itself? I'm not even morally judging it; I'm just saying it is what it is. Can we grasp that that's a thing, and maybe in some way steer toward collaboration? Yay or nay, I don't know. Or will that collaboration always just be a tool for whatever group is strongest, always used for strength? There has been a decent amount of push for this whole equity-of-the-nine-billion idea in some way, so that there can be a decentralized sort of ubiquity where anything you want can just be manifested, so that there doesn't have to be a strongest of the kingdom. That could be the best way to progress, but biology definitely says otherwise over the last four billion years, so it's kind of nuts to distinguish between those two. If we all get to go in and have deity-level experiences, somebody determines how you go in and somebody manages the substrates that you enter, and those are the powerful people. End of story. Yeah, yeah.
Yeah, do you know what's really funny about this conversation? If you didn't have to go give a talk right now, I'm sure we could unpack this so much more. Eventually, where is Tech Emergence based? Boston. I was actually here for two years, but 80% of my business is east coast and Europe, and so two airplanes to London or other places in Europe is just annoying. Well, when you come back, we'll sit down for part two and we can unpack more of this. For now, I love the idea of analyzing AIs to be able to help governments and companies most effectively, and I'm happy that you're doing that. Thank you for joining us on the show. Yeah, of course. Great to meet you, Allen. Yeah, good to rip. It's been such a pleasure. And guys, give us your comments below. We'd love to hear from you. Thanks for tuning in. Also, go and manifest your dreams into the world. Go and build the future. Much love, everyone. We'll talk to you soon. Peace.