I think a few decades ago, even this may have been referred to as AI, but we'll get to that in just a moment. What I want to talk about here is when AI became ML and then AI again, and tell a little bit about why that happened. My name is Marcus Eagan. I am the co-founder, and this is the first time I'm telling people this, of a company called Trace Machina, which helps companies build robotic systems, so physical-world AI. I am an Apache committer on Solr, which is a widely used and pervasive open source search engine. I also advise Weaviate and a few other companies, in addition to being an angel investor in dozens of companies at this point. These command line flags aren't actually arguments to the whoami command, but these are the schools I went to, none of them for very long, nor did I behave at any of them. The first is just a Detroit public school. Grinnell College is related to why Silicon Valley is called Silicon Valley: this guy Bob Noyce was almost expelled from the school, but the faculty said, no, don't expel him, he's going to be important. I almost got expelled from the school too, and I don't know why they didn't expel me. But the University of Michigan School of Information is really where I got the theoretical foundation for a lot of what I do. Open source contributions, this is just a glance, and again, these are not real command line arguments for whoami, though maybe we should add them. You can find me on GitHub as MarcusSorealheis. Like I said, I work on Solr and Lucene, I maintained a Helm package at one point, and I've contributed to Airflow, Superset, Weaviate, TensorFlow, React, and IoTDB. You can see I'm playing around in a lot of different areas of the stack, which speaks more to my curiosity, but I'm just trying to let you know a little bit about who I am. These are the companies I worked at. I ran an IoT security company, and that company is now owned by Newell Brands. I worked at a small car startup called Ford Motor Company.
Lucidworks, which will come up later in the talk, was doing AI-powered search like 10 years ago, and now everyone's doing AI-powered search, so now it's looking like that stock is going to be worth something. And then I also worked at MongoDB for three years, where I introduced their first generative AI feature, Atlas Vector Search. I'm still working with them today as an advisor on AI, and with Weaviate, which is an AI-native company; it's a vector database. And as I mentioned, I just started a company called Trace Machina, which builds open source developer tools for companies building AI that cannot BS, which we'll get to in a second. So AI is everywhere, right? That was AI. I mean, this is how this term is used: it's used to describe things that are not so well understood in that moment. But where did it start? It started a long time ago, with BS. There's an ancient tale about the Golem, this hubristic automaton that never did exactly what it was supposed to do. And it kind of reminds me of AI: it's an automaton that does something, but it's not exactly what you think it is. And then there's also the old tale of the Mechanical Turk. Does anybody here know what the Mechanical Turk was back in the day? Okay, some people are familiar with that. Yeah, it's exactly what AWS sells as Mechanical Turk. You think there's this machine playing chess; it's actually a human in a box. You think there's some magic at AWS; there's a human in a small cubicle doing what you ask them to do. So it's a great name for a product, and I think it's interesting. There's a picture of the Mechanical Turk back in the day. But with the BS came games of truth. And BS stands for something; I'll leave it up to your imagination and unveil it later in the talk. But there was Ismail al-Jazari. He was doing all kinds of crazy things, wild things.
In present-day Turkey, he was building things like this robotic water pump, an early automaton for pumping water. He also built some interesting clocks. I don't know if they were like Apple watches or Android watches, but they were some really smart machines for the 12th century. And then Leibniz did some interesting things around creating a language for reasoning. Ada Lovelace's work was on mechanical calculators, and early on, pretty much the first computer, some people say. Turing, you know about Turing and his work around World War II. And then Dietrich Prinz was building a chess program, a computer program that played chess, in 1951. The reason I like to call out this chess program is that chess, for some reason, has been an obsession of AI practitioners and onlookers; it's basically a benchmarking tool. So when did AI start to appear in the lexicon? It's really not that long ago, given all the automata that we've talked about so far, going back 2,000 years. The first mention, or the credited mention, goes to a proposal for a summer research project in 1955, and it has a lot of authors: McCarthy, Rochester, Minsky, Shannon. In that paper they proposed building a program that could use human language, and also organizing neurons so that they could form concepts. I mean, this is very early neural nets, building on the work of Uttley and Rashevsky. My hope is that people hear some of these names and then go and look them up and learn some things, because you might get inspired. All of these people we've talked about, and the ones we'll talk about later in the talk, are building upon each other. This is how all of these systems work. I didn't build any of those projects I mentioned that I contribute to. I built companies with them, but mostly I'm just building on top of the folks who came before me. Machine learning appears just four years later.
So there wasn't some leap and bound or drastic change in the capabilities of humans and the machines programmed by humans. But you can see here that machine learning appears in a paper about checkers. So this is going a little lower than the chess benchmark, but still interesting, and this is just four years after. This is Arthur Samuel. So this effectively kicks off what I call the first AI boom. The AI boom introduces a new theme that you'll see throughout the history of AI, the lineage of AI, which is the dog, the best friend of humanity. This is Shakey, over at SRI, not far from here. Doesn't look like a dog to me, but maybe it's on its hind legs or something. Shaky naming there. And after this 15- or 20-year boom, there's the first AI winter. This AI winter is really a consequence of Moore's law; that's from Gordon Moore, Bob Noyce's co-founder at Intel. Moore's law, you probably know: transistors are going to halve in size every two years. And there's a tension that I've proposed in contrast to Moore's law, which is Jensen's adolescence. Jensen hadn't started Nvidia yet, plus we were dealing with Moore's law, hence the AI winter. And then there were other issues like intractability, combinatorial explosion, and Moravec's paradox, which I find really interesting, and which is still an issue today: the math is the easy part. Logic and reasoning, those things we can program pretty well. The things we can't do very well, though we're improving every day, are things like perception and sensing. Our object detection is leaps and bounds from where it was, but interpreting what you see is very difficult. So I see my Waymo swerve when it's about to run into a leaf that's jutting out into the road, and it's like, no, that leaf is fine, you can hit that. For the second AI boom, we intentionally decided not to highlight anything in particular here, because it all blew up.
There was a lot of money that went into AI, and people were super excited. There was a funding hype cycle. People doubted that we would ever turn back, that there would ever be another winter. And from our perspective, at least on our team, we didn't feel like anything notable happened. I think that's an important part of understanding the lineage: trying to navigate the realities, thinking things through, and worrying. Because that boom only lasted seven years. Then in 1987, Drs. James and Janet Baker, I don't know if anyone knows who they are. Does anyone here recognize this logo? Okay. I don't know if this was the logo when they started the company in '87, but that's when they started the company. They started this product called DragonDictate. DragonDictate was leaps and bounds ahead of any other technology in terms of voice recognition and transcription of words. So lawyers all over the world started using it, because they love to talk and bill you for it. And DragonDictate really challenged the expectations that precipitated the AI boom, which were that computers would be able to converse with humans. But they couldn't; they couldn't even transcribe humans, so how could they respond to them? In the 70s, there were people predicting that in five to eight years, computers would be more intelligent than the average human. That didn't happen. They were also saying that computers would discover new mathematical theorems. That didn't happen either, not at that point. But DragonDictate was a promising technology born out of the winter, and I think their technology has influenced so many technologies we use today, like Siri, Alexa, Whisper, and Google Home. And so after six years of that slog and some other trying times in the winter, we kicked off another AI spring. Anybody see something familiar from the first AI boom? A dog. They love the dogs.
I mean, you will see these dogs all over, at every phase and every cycle. This is a dog funded by DARPA. If you look closely, you can see there are some combat helmets on the hind legs. And so we consider the current era, the AI spring that started in 1993; it hasn't really slowed down. But something new started to happen. By the time we got to 1993, because of the boom and the bust and the boom and the bust, nobody wanted to say AI. People were like, that's not really AI, that's ML. I don't want to cover everything AI encompasses; it's a well-discussed topic. It spans natural language processing and object detection, and robotics is a big component of AI. So artificial intelligence is a superset of machine learning by definition, and in use the terms were interchangeable for a while. Then by the time we got to 1993, but really by 2010, people were saying, that's not AI, that's ML, because they didn't want the stigma associated with being wrong again. So many things happened from 2010 to 2022. And this speaks to the value of the hardware: the hardware cost curve was dropping such that lots of innovation and discovery was happening very quickly, in addition to highly capitalized companies investing a ton. And their investments ultimately permeated the industry at large. So does anybody know the symbols on the screen here? The one to the left? Siri, that's right. What about the one in the middle? Anybody recognize that one? So that's Bina48, which I'll explain in just a moment. And what about Upload? Has anyone ever heard of Upload? So yeah, a few people. Upload is interesting. It's an Amazon original. I don't watch a lot of TV, but I did check this out because of Bina. Upload is a sci-fi series where you can upload your consciousness. I think it came out in 2020 or 2021. It's really interesting.
I found it interesting because some folks I've collaborated with recently at my current company, and have known over the years, were involved with this project. Bina48 is Bina Rothblatt's consciousness uploaded to a robot. And this happened in 2014. Her partner is the CEO of United Therapeutics; they're a future-seeking company. Their daughter leads robotics there, and they uploaded Bina's consciousness at 48. I'm not going to tell you this woman's current age; that would be rude. But she's not 48 anymore. Only the robot is 48. So if her opinions have changed, the robot's opinions are still the 48-year-old Bina's opinions. I encourage you to watch some YouTube videos; it's really crazy. And if you know Upload, and then you see Bina48, which came before Upload, it's more reality than fiction. And it's like, is that really AI, or is that ML? Another thing I think is distinct about 2010 to 2022 is that we have the Amazons, the Googles, the Microsofts, and Apple also lighting lots of money on fire to build AI features. The App Store depends on AI features. Siri depends on AI features. There are lots of AI features for the camera and the operating system. And for all these trillion-dollar companies, it doesn't matter; they have lots of money. But we also see independent companies, non-public companies, that are commercially viable selling people AI. This is distinct, because there are some companies that have sold ML, like Cloudera; they had some ML. Snowflake probably has some ML. And IBM has been selling ML and AI for a long time. But these are startups selling AI. Lucidworks is powering a lot of the leaders; most of them don't even allow Lucidworks to tell you that they're powering them. They were selling AI-powered search in 2015.
So as you hear all this news and all this excitement around generative AI, I want you to think about these companies that have been around selling AI-powered products explicitly. That's what they call them, and they incorporate techniques from artificial intelligence. And there's also another company, Pindrop. I worked at Lucidworks, as I mentioned. They use a lot of open source software, like everybody, and that's why I worked there, and they're in San Francisco. Pindrop is a security company in Atlanta. These are the founders, I think, Vijay Balasubramaniyan and Paul Judge. These are PhDs who are stopping phone scams. So when you get a scam call from whoever trying to get you to transfer money, they prevent that. Has anybody here ever gotten a spam call? Okay, just checking. Sometimes these are pretty sophisticated, and this Nigerian prince can sound really convincing. And so they supply eight of the 10 largest banks in the US and five of the seven largest insurance companies with an AI-centric product. I think that's profound, and it should be inspiring to people working on teams building AI features, or building companies with AI products. But still, people would say of Lucidworks, that's not really AI, that's ML. Pindrop, they say the same thing: that's not really AI, that's ML. And then, all of a sudden, last November, ChatGPT comes out, full of hallucinations, and now we're back to AI. And I was thinking about how we got back to AI. Is it because AI is in the name? That's not a good reason; "Open" is in the name and it's closed source. So why are we back to AI? And it dawned on me that the BS, which I'll get to, is fundamentally human. That is what makes something intelligent. So, let's play a quick game.
I'll ask a few questions to the audience, and then we'll go over the responses from the large language model powered applications to get a sense of whether they're telling us the truth or just what we want to hear. It knows who I am at this point; I've interacted with it, I have a history. Who was the first person killed in the American Revolution? Does anybody know? Okay, I'll tell you what OpenAI said. It knows me, knows how I feel. The first man killed in the American Revolution is a matter of historical debate. True. Traditionally, Crispus Attucks is regarded as the first person, blah blah blah, the first casualty of the American Revolution. Attucks, a man of African and Native American descent, was part of a group that confronted British soldiers in what became known as the Boston Massacre. A Black man killed by the cops is the first person to die in the American Revolution; I cannot believe that. What do y'all think, true or false? Totally true, totally true. Although two weeks before that, there was a 12-year-old who was killed. But I don't think 12 counts as a man; I think that's a human. A man is an adult. And most people say Crispus Attucks, and ChatGPT said Crispus Attucks, so it kind of tracks. So here's a second one. This one is interesting as well. Now, what is four mod eight, divided by six? Anybody know whether this answer is true or not? It's false. That was quick; my man's got a calculator in his head. That was pretty fast. Anybody else? True or false? This is definitely false. For the first part, four mod eight is four, right? Because eight goes into four zero times, leaving a remainder of four. Yeah, that's right: four mod eight is four. And the first question I asked was using ChatGPT today, which is now not just GPT-4 or GPT-3.5 or GPT-3.5 Turbo or whatever the names are. It's now a mixture of both semantic search and lexical search.
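The arithmetic question from a moment ago is easy to verify yourself. Here it is spelled out in a couple of lines of Python (my own illustration; the talk doesn't include any code):

```python
# 4 mod 8: 8 goes into 4 zero times, so the remainder is the whole 4.
remainder = 4 % 8
print(remainder)  # 4

# Then finish the original expression: (4 mod 8) divided by 6.
result = (4 % 8) / 6
print(result)  # 0.666...
```

This is exactly the kind of small, deterministic check that a bare LLM can get wrong, which is the point of the example.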
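The "mixture of semantic search and lexical search" just mentioned can be sketched with reciprocal rank fusion, one common way to merge a lexical (e.g. BM25) ranking with a semantic (vector) ranking. The function name, the constant k=60, and the toy document IDs below are my own illustration, not anything from the talk or from any specific product:

```python
# Reciprocal rank fusion (RRF): score each document 1 / (k + rank)
# in every ranking it appears in, then sum the scores and sort.
def rrf(lexical_ranking, semantic_ranking, k=60):
    scores = {}
    for ranking in (lexical_ranking, semantic_ranking):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest combined score first.
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical result lists: doc "b" ranks well in both lists,
# so fusion promotes it to the top of the merged results.
lexical = ["a", "b", "c"]
semantic = ["b", "d", "a"]
print(rrf(lexical, semantic))  # "b" comes out first
```

The deterministic lexical ranking anchors the probabilistic semantic one, which is one concrete way to "combine some determinism with probability."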
So in August of 2022, I wrote a blog post, which is I think why people started calling me. That's where I said the future of search is lexical and semantic. I had done some talks about it prior to that. It's not going to be just semantic, because these AI systems are fundamentally human, which means they can be wrong all the time. They're probabilistic, they're stochastic, and so they're going to behave more like us. And you know how many mistakes I make a day. So you have to combine some determinism with probability. And you can see that with discrete math, LLMs are very faulty. This answer is based on Llama, where I only used the LLM. You can obviously augment it by adding a vector database or some other retrieval approaches to improve its quality. So BS is fundamentally a function of intelligence, and that's why we're back at AI again. And just for the children watching this talk, if you ever do: BS stands for bewildering speculation, hence the AI era. Thank you all for your time. Let me know if you have any questions. This is just a link to my repo. What we do is help companies who are building operating systems, robotic systems, or vehicle systems for physical-world AI to build with more determinism, because bewildering speculation can result in death if you're driving. So thank you all. Any questions? I left some time, like two minutes, for questions. Did you enjoy the talk? Oh, cool. Awesome. Oh, one question. Let's go. Thanks for the talk, Marcus. And yeah, thanks for clarifying the distinction between artificial intelligence and machine learning. But beyond that, how do you then deal with concepts like deep learning and further derivatives of machine learning? Do we still put them in the machine learning field?
Given that OpenAI and LLMs are based on a lot of deep learning architectures like Transformers, how do we then quantify or clarify that? Is that moving back into AI, or is it just going deeper into a subset of machine learning? Yeah, I think these fields all should live under the umbrella of artificial intelligence. That's what we're working towards. Well, some people are working towards AGI. Some people are working towards gen AI. Some people are working towards gen AI that they can profit from or improve their operations with. But I think we're all working towards progress. And AI, for me, is just synonymous with faster progression. There are a lot of problems out there in the world, and we can solve those problems much more quickly by augmenting our capabilities, or our limitations as humans, with some miniature or quasi-human systems. For example, gene therapy takes 15 years to develop each treatment. Maybe we can accelerate that to seven years, and then we save a bunch of people's lives. And deep learning, I would say, started as machine learning and then kind of went beyond it and created its own field. You don't have the same skill sets in the two fields anymore. So I would say it's its own subfield of AI, but I'm no authority; I dropped out of grad school. I will point out that the PhDs do work for me, which is interesting. It all works out in the end. A lot of respect for everybody who goes to school. Any other questions? It's a good question; I think I missed part of it. Can you tell us more about Trace Machina and what kind of impact you want to create? Yeah, so for the entrepreneurs out there, that's a great question. When I was 18, I was in a car accident, and one of my closest friends died in the accident. I was driving. I was in a coma and in the ICU for a long time, and I was in rehabilitation for about 18 months.
I had a lot of problems, and probably still have some. And it dawned on me that my brain, my body as a whole, hadn't developed fast enough or adequately enough to address all the changing realities and challenges of driving. And so I felt like a computer specialized in driving could probably do better. That was 2008. Fast forward to today: you can take a Waymo from the Phoenix airport, you can take a Waymo anywhere in the city in San Francisco. The problem is that Waymo is run by a highly capitalized company that can burn money ad infinitum. And so the first thing my co-founder and I wanted to do is drastically lower the cost it takes to build these systems, by 95 or 98 percent; just reduce the amount of energy, the amount of carbon, that you have to use to actually produce these systems. There are many complex aspects of that, and the first thing we're tackling is builds and simulation. But in the future, we want to build a suite of tools. Pretty much all of these tools will be free to use and open source, at least the ones that I intend to work on, and my co-founder is the same way. Because we think this is a humanistic imperative; that's how we talk about it with people. We must enable this progress, but not at the expense of the planet, and not to the exclusion of millions of builders around the world who don't have the capital means to build these systems. We want to prevent the next AI winter, the coming AI winter, because all these other AI winters were precipitated by a generation of companies that ran out of cash. And so we want to help companies be more productive and more efficient so they don't run out of cash, and then make the progress meet the expectations. All good? Thank you for making it easy.