Hello and welcome. My name is Anita Krass and I am the Webinar Production Assistant at DataVersity. We would like to thank you for joining the current installment of the monthly DataVersity Smart Data Webinar Series with host Adrienne Bould. Today, Adrienne will discuss artificial general intelligence: when can I get it? Okay, that last part is my addition, but just a couple of points to get us started. Due to the large number of people attending these sessions, you will be muted during the webinar. For questions, we will be collecting them via the Q&A panel in the bottom right-hand corner of your screen, or, if you like to tweet, we encourage you to share highlights or questions via Twitter using the hashtag #SmartData. If you would like to chat with us and with each other, we certainly encourage you to do so; just click the chat icon in the top right for that feature. As always, we will send a follow-up email within two business days containing links to the slides, a recording of this session, and any additional information requested throughout the webinar.

Now let me introduce our series speaker for today, Adrienne Bould. Adrienne is an industry analyst and recovering academic, providing research and advisory services for buyers, sellers, and investors in emerging technology markets. His coverage areas include cognitive computing, big data analytics, the Internet of Things, and cloud computing. Adrienne co-authored Cognitive Computing and Big Data Analytics (Wiley, 2015) and is currently writing a book on the business and societal impact of these emerging technologies. Adrienne earned his BA in psychology and MS in computer science from SUNY Binghamton and his PhD in computer science from Northwestern University. And with that, I will give the floor to Adrienne to get today's webinar started.

Hello and welcome. Great. Well, thank you, Anita. It's nice having you on the other end of the line today. It's a welcome change, because that would be bad for Shannon, but glad to have you in that role. And thanks to everybody for joining us today. That was just a test to see if your microphone was still on.

Okay, so artificial general intelligence, and I'm going to use the Anita variant: when can I get it? That's where the emphasis goes. Today, what I want to do is talk about the research area of artificial general intelligence. I'm going to start out by presenting a foundations section that looks at how we got to where we are today in AI and artificial general intelligence, and what the differences are. I'm going to spend a few minutes on the importance of building systems to play games, and why that's been the way we, as an industry, have approached a lot of these problems of simulating natural intelligence. Then I'll take a quick look at AGI today: some of the approaches people are working on, some interesting research that's out there, and a couple of examples. Then I'll talk about the differences between artificial and augmented general intelligence. And finally, as we promised in the abstract, I'll give you the criteria that I use when I talk to vendors who suggest that they have an AGI solution: what questions we ask, and how we determine whether they're actually making progress.

So, let's get right into the foundations. For those of you who have been on some of the webinars in this series before, there will be very slight overlap today, two or three slides, but I wanted to set the scene as if you hadn't been with us at all before.
So, we're going to start by looking at modern AI versus the AI roots. And I'm going to go into more detail today than I have in any of the other sessions in terms of the historical foundations, because I think it's really interesting to see how the field started out in one direction with one set of beliefs. I don't claim to have been there; I was about two or three years old when you see the dates on this. But we'll look at what AI was at the very beginning, what it became, and where we're going today, in terms of where we want to, in some cases, replace natural intelligence, replace people in specific tasks. More likely, though, most of the systems being built today are really not so much automating as augmenting intelligence. We did a session last week at the DataVersity Smart Data Conference in California on AI and cognitive computing in the future of work; much of the talk about job displacement or replacement with AI is really focused on robotic process automation, which is distinct: it's taking routine tasks and automating them, but not adding intelligence. So we're going to look at this and get a sense of where the market is today, what the opportunities are, and how we go forward.

So, the very beginning, back in 1955, and I'll try not to get into recovering-academic mode today. We're not going to go through all the details here, but by all accounts, when we think of artificial intelligence, this was the beginning. In 1955, a proposal was submitted to hold a conference in 1956 at Dartmouth, a summer research project on artificial intelligence. The reason I'm going to spend a few slides on this is that it really does give you a good understanding of what was believed at the outset and what they were trying to accomplish. Just a couple of points here, from the actual proposal (it was a proposal because they wanted to get funding for a summer program): the conjecture is that every aspect of learning, or any other feature of intelligence, and they're talking about natural intelligence, can in principle be so precisely described that a machine can be made to simulate it. Simulate is an important word, because we'll look at the difference between modeling something and mimicking it. They're essentially saying we can do it, because the principle is that we can break everything down into component parts and automate them on a computer. And again, historical context, 1955: they wanted to find out how to make machines use language, form abstractions and concepts, solve problems now reserved for humans, and, this last part is really important, improve themselves. It was a pretty bold attempt, and I love this last line (I added the emphasis here): we think a significant advance can be made in one or more of these problems if a carefully selected group of scientists work together on it for a summer. In 1955, they wanted funding to have 10 people get together and make progress on one or more of these areas.

So there are seven areas that they laid out as foundation areas for artificial intelligence. I'm going to go through them quickly, and we're going to see where we are and how things have evolved. At the very end, we'll come back and look at these seven to see what is still to be done. The first one's pretty straightforward.
If a machine can do a job, then an automatic calculator can be programmed to simulate the machine. Here they were recognizing the limitations, in the fifties, of speed and memory for computers. The idea was that the stumbling block wasn't machine capacity, but the inability to write programs that took full advantage of what they had. Now, as someone who wrote my first natural language processing program back in 1978 on a mainframe using 64K, 64,000 bytes, I would have to say it's not just that we couldn't take advantage of what we had; it's that we didn't have much, on the scale of how natural intelligence, how biological intelligence, uses resources. But that was the first one.

Now, this second one is fundamental, again, to the beginning of AI: how can a computer be programmed to use a language? And I'll tell you right now, we're going to come back to this. We did a session last year, and I did a tutorial last week, on natural language processing and how much we've advanced. But the key phrase here is use a language, not understand the language. The speculation was that a lot of human thought is manipulating words according to rules of reasoning and rules of conjecture. So they were taking an almost mechanical approach to how people do things. We talk about anthropomorphizing machines; this is sort of the inverse. They're saying that people process language in a way that maps nicely onto what we have in a computer. It hadn't been precisely formulated, but this was the idea. So that was the second point.

The third one you'll recognize if you've been following what's happening in AI today: right there at the beginning in 1955, using neural nets, or neuron nets, as they were called then. How can a set of hypothetical neurons be arranged to form concepts? They said the problem needed more theoretical work. This was really the subject of a lot of research early on, and then it fell out of fashion. And certainly, if you've been following almost anything in AI in recent years, you'll know that the idea of neural nets is part of the foundation of just about everything we do today in machine learning. But it was right there at the beginning.

The fourth one: theory of the size of a calculation. Given a well-defined problem (in their terminology, well-defined means you can tell whether an answer is valid or not), instead of trying a brute-force approach to problem solving, using every possible path, the issue for AI was building a theory to determine the complexity of functions. That's not something we generally think of as a hot AI area today, but in the summary at the end we'll talk about how that got used in most of the things we've done in the interim.

Five, self-improvement. A truly intelligent machine will carry out activities described as self-improvement. This is kind of interesting: as Anita mentioned, I started out in psychology, and you might start to think of something like Maslow's self-actualization, of how you improve yourself based on experience and based on context.

Six, abstractions: a number of types of abstraction can be distinctly defined. And here we're looking at a direct attempt to classify and to describe machine methods.
That's the key: machine methods of forming abstractions from sensory and other data would seem worthwhile. One of the keys in here is from sensory data. There's nothing in the list for this 1956 conference that explicitly looked at auditory input or machine vision, but this is where it fits, under deriving abstractions from sensory data.

And finally, an attractive and clearly incomplete conjecture. I love that even the conjecture is incomplete. The difference between creative thinking and unimaginative competent thinking is in the injection of some randomness, but the randomness has to be guided by intuition to be efficient. In other words, the educated guess or the hunch includes controlled randomness. As we look at some of these approaches, we'll see that this is an area that just dropped off the map in the last 60 or so years. So that's where the early definition of artificial intelligence came from. Those are the seven major areas, and there are sub-areas, things like machine vision.

So where does AGI, or artificial general intelligence, fit? In this diagram, what I'm attempting to do is give the simplest view of the world when we think of AGI, because it's a term that's been in and out of favor over, let's say, the last decade. There are two major dimensions I want you to think about, and some minor ones that we'll get into. The first is the learning model, the way the system learns: is it guided externally, for example through interaction with a human being, or is it internal, as the system processes the data? We don't have to think about this as purely analogous to machine learning, but the idea is that when we talk about learning in any of these AGI situations or contexts, we may try to approach it with a model of how humans learn: supervised learning, where you're training somebody, or something that's unsupervised, where there's discovery, something you learn by observation. So the question is, how does the system learn? (There's a minimal sketch of that contrast just below.)

Then, on the vertical axis, the knowledge domain. We can have systems that are very narrowly focused and constrained, looking at one type of problem, maybe one profession, one activity. We tend to break things up by vertical markets: maybe we're looking at the practice of medicine, within that oncology, within that some constrained subset; or we may be looking at roles and activities that go across businesses. As you broaden the lens and go from a narrow, constrained domain to something very broad or unbounded, if you have a system that can operate in multiple domains, so that it either has the knowledge in all domains to begin with (we'll talk about how that's been attempted) or can acquire knowledge on demand in any domain, and it doesn't require external input or support to do the learning as it crosses domains, that's where AGI, or artificial general intelligence, fits. At the other end of the spectrum, when more interaction or support is required for the training and the learning, or the domain is very narrowly defined, you're getting into specialized AI. Some people call that weak AI, versus strong AI, which is another term for artificial general intelligence.
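To make that horizontal axis concrete, here is a minimal sketch of externally guided (supervised) versus internal (unsupervised) learning on toy data. This is purely illustrative; the data and the choice of scikit-learn models are my own assumptions, not anything from the talk.

```python
# Minimal illustration of the two learning models on the horizontal axis:
# externally guided (supervised, labels from a teacher) versus internal
# (unsupervised, structure discovered from the data alone).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two toy clusters of 2-D points standing in for any feature space.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)  # labels supplied by an external teacher

# Externally guided: the labels y drive the learning.
supervised = LogisticRegression().fit(X, y)

# Internal: no labels; the system discovers the grouping on its own.
unsupervised = KMeans(n_clusters=2, n_init=10).fit(X)

print(supervised.predict([[5, 5]]))  # class learned from the labels
print(unsupervised.labels_[:5])      # clusters discovered from structure
```

The point of the contrast is only that the first model cannot learn without labels supplied from outside, while the second organizes the data entirely on its own.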
It's really a question of scope and approach. To put that in context, what I refer to here as classic AI is the way problems were attempted up until maybe five to ten years ago. If you look at what was proposed for Dartmouth, it was the full range: everything could be solved, everything could be automated, everything could be identified, partitioned, and taken down to that fine level. So at the very beginning, in the 50s, when they were looking at AI without using the term general, it was an attempt to create what we think of today as AGI. That focus narrowed very quickly, and so most of the work in AI from the late 50s into the 60s and 70s, certainly even into the 80s, was what I'm referring to here as classic. It's fairly narrow: you'd be looking at a problem like perception or understanding or planning or learning, but the domain being automated or augmented or artificially created was generally pretty limited.

I used a diagram like this recently to show the two fundamental approaches: we can do everything simulated in software, or in hardware. There was very little work in the 50s and 60s on specific hardware for AI; that changed in the 70s and 80s, which were a hotbed of activity in creating hardware that attempted to mimic biological systems. But you could either mimic or model. If we look at it today, modern AI is classic AI problem solving plus machine learning, and, just to reinforce it, in terms of machine learning, what's old is new again. What we're doing today in machine learning, although there have been many advances, is largely what was anticipated as part of the original definition of AI. But it fell from favor so hard and for so long that I think of it as new again. The other big component here is the ability to manage big data, and we'll see in a minute how that's influenced the way people are designing these systems. So today, most of what we see in AI is still fairly narrow. We can still look at it with a hardware or software focus. Those could be the fundamentals we would use for general intelligence, but most of the effort has been very, very narrow in scope.

What I want to do now is take a look at one more piece of the puzzle, which is cognitive computing. That's one of the areas I spend a lot of time in. This is part of the webinar we did last month, where we looked at systems that are attempting to model or mimic, mostly model, human cognition, which I break into four categories: understanding, reasoning, learning, and planning, all interdependent. If you're interested in that and want more detail, catch me offline; we have some materials on this. But again, the model captures the corpus, all the information that you're going to use as the basis for decision making. Today, we're still dealing with one domain at a time in just about all of these systems, so it's not what we think of as AGI.

And there's so much confusion based on marketing that I just have to put this up. When you're looking at Google Home or Siri or the Amazon Echo, you hear a lot of claims that are hard to live with if you're working in this part of the industry. There's no doubt that each of these devices is much, much more advanced than anything we had just a few years ago in terms of natural language understanding.
But the understanding they have is really at a level that just allows it to be used for search, lookup, some context within a specific domain. It doesn't take long to get to the point where nobody's going to be fooled into thinking you're talking to a person, right? So understanding is very different from recognition; I'm going to come back to that. But if any one of these devices had appeared 20 or 30 years ago, we would have just said, wow, that is artificial intelligence. Really, what we have here is some good technology, some good subset technology, if you will. And I think it's a great start for socializing, for getting people ready for the next wave.

With that, I'm just going to make a couple of points on how these things are perceived in the world. This is going back a little over 10 years: a CNN article, AI set to exceed human brain power. It's great to take a look at something that's just a decade old and see that it always seems that what we're looking at is about to change the world. Now, that's not to say that it isn't really about to this time, but we were looking at exceeding human brain power, and it's so narrowly defined. There is concern in the industry, and in computer science in general, that the claims that have been made, things like exceeding human brain power, combined with people seeing devices from Siri to Echo and so on, will lead to a backlash. The last several times that happened in AI, it was followed by what was called an AI winter, which I'm going to define as a lack of ability to deliver on marketing claims, followed by disillusionment, followed by a drop in investment, which dried up the research.

So I'm going to make the statement now that I think we're not in any danger at this point, contrary to some speculation I've seen recently, of an AI winter. We're actually in the spring. There's a slide here, and another from a report we did on the machine learning markets, showing that the VC ecosystems are building up, and just about anything out there today that has AI in the title is getting funded. Some of them may be ridiculous. Sometimes it's what they call an acqui-hire instead of an acquisition: little companies are getting bought just for the talent before they actually get any product out. But what I think is interesting here, and I'll go to the next one, is that companies are getting money from a variety of sources. It used to be that most AI research was government funded; there's still quite a bit, like the SyNAPSE project that I may talk about later. But look at the one here under Digital Reasoning, where the investor is In-Q-Tel. In-Q-Tel is the venture arm of the US intelligence community, and if you're interested in this market, I encourage you to look at anything they say publicly about who they've invested in, because one of their big areas is AI technology they believe will have an impact on national security and intelligence. The fact that they're still putting money in says to me that we're pretty far from an AI winter.

But with all of that, there's a tendency in some of the press to say, well, we've solved the problem. This is an article from last year in The Financial Times, after Google's AlphaGo beat a master at the game of Go: this AI thinks like humans do; they learn and improve over time.
And I'm just going to throw it out there now, and then we'll get into some more of the details: that's complete rubbish. AlphaGo does not think like humans do, any more than I think like a rock does. It's a completely different approach. And that belief, that the thinking process is the same as a human's, is more dangerous than most things in the technology.

So, quickly moving on. The issue for a lot of these systems is that they're looking for a pattern, and then we say the system has learned. The reference here, Building High-Level Features Using Large-Scale Unsupervised Learning, is a famous example submitted back in 2011. Google built a system with 1,000 machines, using 16,000 cores, and analyzed unlabeled images; so it was unsupervised. The system learned, and I'm using air quotes here, to recognize cats. Ten years ago it was almost unimaginable that you could do something that fast; today, using machine learning, you can do it at a level of accuracy better than humans. But when we talk about learning and we talk about image recognition, I want to make a very key point: recognition is not understanding. That system still didn't know the difference between a cat and a dog. It knew nothing of what even a three-year-old knows about cats. I've been thinking about how to make this point clearly, because the lines are getting so blurred. We can build systems to recognize things. We can build them to recognize relationships. We can build them to do statistical modeling. We can have a model, as we've discussed in earlier webinars, that makes some assumption about where a representation of an item fits in memory, such that, if our model is correct, the association between object X and object Y is stronger if their representations are closer together in memory. But I'm just going to say it one more time: recognition is not understanding. I recognize my wife. I recognize North Korea. I don't really understand either of them.

Okay. Now we're going to quickly switch to games and AI and AGI; it's one of the cooler areas. So, is it AI or isn't it? We're looking at three examples in a little more detail here. The first is the chess competition with Garry Kasparov, the second is IBM Watson, and the third is last year's AlphaGo and the game of Go. Just to make sure we're clear, because terms change over time, I love this article title: If It Works, It's Not AI. That was just a few years ago, like 1977. But it's true today: once something is in production and it's all around us, we don't tend to think of it as AI anymore. I saw someone who probably should have known better writing about this recently, saying, well, we had all this expert system and knowledge-based system stuff in the 80s, and where is it today? The fact is, it's in most business applications. We have business rules engines that use pretty straightforward logic and reasoning. But once something gets into production, we tend to stop thinking of it as AI. It's a moving target, and that's important when I talk about my own views on AI.

So why do we build these game systems, what have we learned from them, and what is generalizable? Back in 1956, Arthur Samuel at IBM built a system to play checkers. The key thing there was looking at the rules, how people play, and how you anticipate. Checkers, of course, is a two-person game.
It's perfect information: you know what the rules are, a constrained set of rules; you know what the other person has; you know what their options are. So you're trying to look ahead and place a value on each position. And it's zero-sum: if you do something that gains you a score, it has to come at the expense of someone else. Chess is the same way; it's a very similar game. Obviously there are more potential moves, and going several moves ahead in chess, the combinatorics of figuring out all the possibilities is much more difficult than in checkers. That's why, frankly, it took from the late 1950s until the end of the last century to build a system that could look ahead and analyze the moves in a way that was competitive. But just as I criticized the Financial Times article for saying the machine did it the way a person does, chess grandmasters don't do that. They don't look ahead 10 moves and analyze every possible combination; there are different patterns they're looking at that the machine isn't. So the machine is solving the same problem in a very different way, taking advantage of its strengths. Go is a much more complicated game again in terms of combinatorics, and a lot of people thought it wouldn't be solved for several more years. But last year, using dedicated hardware and massively parallel software, the Google system AlphaGo beat a Go grandmaster. So now the question is: that's a very specialized application, very specialized data, very specialized rules, certainly not general intelligence. But we can start to ask whether there are things in each of these that we can apply to the acquisition of general knowledge, to use for general intelligence.

A little more advanced: when we look at the Watson system for Jeopardy in 2011, now we're talking about a three-player game, or two people and a machine in that case. It's still zero-sum: if you gain some points, it comes at the expense of someone else. But the big difference is that we're dealing with imperfect information and natural language. Now this starts to get to the type of system we're talking about that can learn, and learn is always in air quotes here. We can represent things in a way that lets us apply formal reasoning techniques and probabilistic techniques to make hypotheses, generalize, and do evidence-based processing. So this is the first of these game-based systems that I think is somewhat generalized: the Jeopardy game is an open-domain game, so it could have been any topic.

And then poker. Sorry, I thought there was something on the line; it's actually something outside the house. We've just had a major storm here in Connecticut. So, poker: very different from Jeopardy. It's a multiplayer game, it's imperfect information again, and it is zero-sum. But there's the issue of, and I'll just go to the next slide, why this is a big deal. In poker, the problem was just solved this year, basically, or solved in terms of a level of competence. You're dealing with imperfect information: you don't know what the other people have; you know what's showing on the table. But there's the betting and the bluffing. And, similar in one way to what I was describing with Watson, now we're dealing with generalized learning and problem-solving skills that bring in much more than just knowledge of the rules. We start to look at things like emotion and risk tolerance and profiles.
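Stepping back for a second: the look-ahead-and-place-a-value approach that checkers and chess programs use is, at its core, minimax search. Here is a minimal sketch; the game interface (legal_moves, apply, is_terminal, score) is hypothetical, just to show the shape of the idea, not any particular engine's API.

```python
def minimax(game, state, depth, maximizing):
    """Minimal minimax for a two-player, zero-sum, perfect-information game.

    The `game` object with legal_moves/apply/is_terminal/score is a
    hypothetical interface, not a real library. Assumes is_terminal()
    also covers positions with no legal moves.
    """
    if depth == 0 or game.is_terminal(state):
        return game.score(state)  # static evaluation of the position
    values = [minimax(game, game.apply(state, move), depth - 1, not maximizing)
              for move in game.legal_moves(state)]
    # Zero-sum means one side's gain is exactly the other side's loss,
    # so the players simply alternate taking max and min of the same value.
    return max(values) if maximizing else min(values)
```

The combinatorics mentioned above fall out directly: with branching factor b and depth d, this visits on the order of b^d positions, which is manageable for checkers, hard for chess, and hopeless for Go without the pruning, learned evaluation, and massive parallelism that AlphaGo added.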
So this to me was a huge thing: just recently we got to the point where there is a poker-playing system, out of Carnegie Mellon, playing at a tournament level. Another reason I want to talk about poker, aside from the game itself, is the betting, which we also saw in Jeopardy: if you've seen the game, there is an area where you're making a bet while dealing with uncertainty.

Going back to chess: it's reached the point that the best of the chess programs, with dedicated software, are playing at world championship levels. But what's happened recently is a change to the game: it can be played with mixed teams of people and machines. The reason I bring this up is that we often talk about the difference between artificial and augmented, for both AI and AGI. Is it something where you're trying to simulate or replace or supplant an individual doing the job, trying to do it better, or are you trying to make them better at what they do? In a recent competition, one of these crowdsourced events, the team that won actually used a less sophisticated version of the chess program, but used it to guide and inform the team members who made the actual decisions. What's interesting to me, as we start to look at the field of general intelligence, is what happens when we try to augment it, make everybody smarter, and give people access to these systems. I think this is one of the first indications of where things are going in other domains.

All right, AGI today. When we talk about the I in AI and AGI, it's intelligence. And when we talk about natural intelligence, the standard metric we use for people is: what's their IQ? What do they know? Going back well over 100 years, IQ has been measured using a number of factors. Charles Spearman, and there were IQ tests before him, looked at it as a factor analysis, taking correlations between multiple tests testing different things. The idea of this g factor, or general factor, is that there's some general ability that we measure, and then there are narrow ability factors. It's kind of like teenagers dealing with things like the SATs, right? You're looking for some natural ability, but you have to express it in a standardized form; that's where the general test comes in. Then you have the narrow ability factors, and you factor all of those in to decide, arbitrarily, I guess I would have to say, whether someone is intelligent or not.

The problem is, when we're trying to say whether one system is more intelligent than another, or whether a system has reached a level of intelligence in a domain or across domains, there's nothing like that for AI. We don't have an IQ test. You can certainly build systems to perform on an IQ test, but there's no direct analogy for machine intelligence. So the problem, as I see it, in looking at artificial general intelligence versus natural general intelligence, is that in most cases we have to get away from trying to do this mimicry. You may have seen this before; I've used this slide in a different context. In human cognition, we're dealing with 100 billion neurons and between 100 and 500 trillion synapses. It really doesn't make sense today, with the technology we have, to try to build any system that depends on a direct mapping to this architecture.
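A quick back-of-envelope calculation makes the point. The 4-bytes-per-synapse figure below is my own assumption, purely for illustration (a single float32 weight per synapse, ignoring connectivity, dynamics, and neuron state entirely), not a number from the talk.

```python
# Back-of-envelope: storage for a direct synapse-level mapping of the brain.
neurons = 100e9           # roughly 100 billion neurons
synapses = 500e12         # upper end of the 100 to 500 trillion range
bytes_per_synapse = 4     # assumed: one float32 weight per synapse, nothing else

total_bytes = synapses * bytes_per_synapse
print(f"{total_bytes / 1e15:.0f} petabytes just for the weights")  # prints: 2 petabytes
```

And that is before representing any of the connectivity or the dynamics, which is exactly why the next point argues for different minimum requirements.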
So instead, the minimum requirements today for an AGI system are either what I call big knowledge with modest processing, or big processing with big data. The way I look at this: if we're trying to build a system that can operate with minimal external influence, think of somebody who is intelligent; they don't need to interact with the community to increase their knowledge. If they have access to data, which may be experience of the real world or may come from a library, they can acquire that knowledge. So if we build an AGI system, you either need a body of knowledge, a corpus that crosses domains, in which case you need perhaps only modest processing (you still need some sort of reasoning engine, a way of managing knowledge, and a way of taking knowledge you already have and creating new knowledge out of it through reasoning), or you need a lot of data and really enormous processing power. I put it in those terms because if you don't have that corpus to begin with, that body of knowledge crossing all these domains, then every time you work on a problem you need to derive the knowledge relevant to that problem. That's a different approach. And we'll see that there are actually a couple of systems being developed that take one or the other of these approaches.

In the design choice for AI in general and general AI in particular, and I'm overloading the terms here, we generally start with one of two things. The first is symbolic logic and processing: mechanical theorem proving using symbolic logic, where everything has to be represented in unambiguous terms. Then we can process it, either deductively, going from the clauses, where, if all your premises are defined accurately, you have complete confidence in the result, or inductively, going bottom-up to create a hypothesis and dealing with it probabilistically. The second is starting with a statistical model, where you're just looking for relationships. I put lines going back and forth in the diagram to say that, yes, you start with one or the other, but in general it's a hybrid approach: you're going to use statistical models to back up what you have in your symbolic logic.

Some of the approaches that have been taken, just to give you the overview: One is to focus on human interaction. The MIT Cog project, which has actually been retired at this point, was an early one; the hypothesis was that human intelligence requires experience from interacting with other humans, based on a psychosociological approach to learning that says we learn from experience, but we learn from interaction. Others focus on machine learning, or on capturing common knowledge (we all know that common knowledge isn't common). Another approach is to focus on brain-inspired architectures; we don't have time to go into those as well as all the software ones, but if you're interested, those are some of the hardware models, like the SyNAPSE project I mentioned earlier. Or focus on representation, and this is where we get into philosophy and linguistics, where there are still some fundamental debates in the field.
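Before moving on, here is a minimal sketch of the symbolic, deductive side just described: forward chaining over unambiguous facts and if-then rules. The facts, rules, and the confidence figure at the end are all hypothetical, just to show the contrast with the probabilistic, inductive side.

```python
# Forward chaining: apply if-then rules to unambiguous facts until nothing
# new can be derived. Facts and rules here are hypothetical examples.
facts = {"bird(tweety)"}
rules = [({"bird(tweety)"}, "flies(tweety)")]  # (premises, conclusion)

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)  # deductive: certain, given the premises
            changed = True

print(facts)  # {'bird(tweety)', 'flies(tweety)'}

# The inductive, statistical side instead attaches a probability to a
# bottom-up hypothesis rather than asserting it with certainty:
hypothesis = ("flies(X) :- bird(X)", 0.9)  # hypothetical learned rule, 90% confidence
```

The hybrid described above keeps the symbolic conclusions but backs them with statistical confidence of that second form.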
Let's look at two now that I think are really interesting and representative of work going on today in general intelligence. OpenCog: the article is here, and if anything on these slides isn't obvious and you want a copy or a reference, just send me an email after the talk and I'll be happy to get you the original reference. OpenCog is a framework for artificial general intelligence, and if any of you were at the Smart Data conference last week, I was lucky enough to introduce Ben Goertzel, who is the co-author of this and one of the leading thinkers in artificial general intelligence. OpenCog has been around for about a decade now; in the upper right is the OpenCog Foundation. They're actually building a system and tools: an interesting knowledge management system that uses hypergraphs to capture, manage, and model knowledge. In a regular graph, an edge connects exactly two vertices, though a vertex can have any number of edges attached to it; in a hypergraph, an edge can join any number of vertices. They've got an interesting model that they've been working on for about 10 years, and they have tools that let you go out today and start working on this. Some of it is quite open and easy to get started with, so I'd encourage you to take a look at that.

Another approach is Cyc, and this is one that I'm particularly fond of, because anybody who spends this many decades building something in AI should, I think, be rewarded. Doug Lenat also spoke at the conference last week. He's been working on this project for 30 years, to create and capture what we think of as common sense and build this database, adding to it year after year, and here's a current link. This is OpenCyc, and they currently have some 239,000 terms and over 2 million triples, where they've been capturing relationships. It's all in English, so that's one constraint, but it's an attempt to capture common knowledge across domains, which would let you do things similar, in a way, to what Watson had to do on Jeopardy and synthesize. The difference is that they're building an actual repository of knowledge, and I would encourage you to take a look at that too. So those are two different approaches that are out there right now, probably among the leading ones.

The alternative, though it's not really an alternative but another way of looking at it, is, instead of building one universal system, which is what an AGI system would be, to augment human knowledge. What I think is important to see is that you can set out to build either augmented general intelligence or artificial general intelligence. True AGI, if you really had something fully cross-spectrum and self-learning, could be used as augmented general intelligence; but that's when you get into the issue of at what point you have to have some sort of concurrence and control. So true AGI can function as augmented general intelligence, but augmented general intelligence wouldn't necessarily be usable independently as AGI. Today, most of what we see is augmented. I go back to a quote that you'll see attributed to a lot of folks; Tom DeMarco, one of the early software engineering leaders, was the first person I heard say it: a fool with a tool is still a fool. If we're trying to augment general intelligence, the idea is to build systems that know more than you do, or can find out more than you do, or can help you, so that you can make the decision yourself, but make it better.
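Going back to the two representations for a moment, here is a small sketch contrasting Cyc-style triples with hypergraph edges. These data structures are simplified stand-ins of my own; they are not OpenCog's or OpenCyc's actual APIs, just the core structural idea of each.

```python
# Simplified stand-ins for the two representations; not the real
# OpenCog or OpenCyc APIs, just the core structural idea of each.

# Cyc-style triples: every relation links exactly two terms via a predicate.
triples = [("Cat", "isa", "Mammal"), ("Mammal", "isa", "Animal")]

# Hypergraph edge: one edge may join any number of vertices at once, so an
# n-ary relation needs no decomposition into binary links.
hyperedges = [("gives", ["Alice", "Bob", "Book"])]  # a 3-ary relation, one edge

def isa_closure(term, triples):
    """Follow isa links transitively: trivial reasoning over triples."""
    found, frontier = set(), [term]
    while frontier:
        current = frontier.pop()
        for subj, pred, obj in triples:
            if subj == current and pred == "isa" and obj not in found:
                found.add(obj)
                frontier.append(obj)
    return found

print(isa_closure("Cat", triples))  # {'Mammal', 'Animal'}
```

Even this toy transitive-closure query hints at why a large curated repository like OpenCyc, with its 2 million-plus triples, is useful: simple inference rules compound across the whole knowledge base.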
And with that, I'm going to quickly revisit the 1956 list and see how far we've come, give you my own checklist for how I evaluate these tools, and then turn it over to questions. We had seven things that the founders, the founding fathers of artificial intelligence, if you will, set out to work on. I would say that in 60 years we've become pretty adept at programming; number one said the limitation was our ability to use the resources. Number two: I would say right now we have incredible ability here, as long as we keep to the phrase to use a language. I still don't think we're close to where we want to be in terms of understanding, but since at the outset they only said they wanted computers to use a language, that's a big part of the interface today. Neural nets: definitely; even though they went out of favor, they're now the basis for modern machine learning. Number four, the theory of the size of the calculation: anybody who's taken a course in fundamentals of algorithms knows that we have a pretty good, well-researched, well-documented understanding of algorithmic complexity, which was fundamental to getting where we are.

But the last three still need work. In terms of self-improvement: yes, if we define improvement as learning, then we're okay, because we've defined learning in terms of machine learning, which is simply improving performance. But I think the intent was, and still should be, to look at improvement in ways that go beyond that. Abstractions: I'm going to give us partial credit. We've done some good things, machine vision for example, but it still doesn't have the understanding behind the abstraction. That's something we put in, something that's built in, even with systems like Cyc or OpenCog; there's some human judgment in what abstractions you use, in building the taxonomies and ontologies. Randomness and creativity: I've never seen a system where guided randomness is used effectively, but I think it's something we will see. Creativity, of course: we have machines that have recently done everything from writing music to creating a movie trailer, by taking clips, showing an audience different alternatives, and using emotion recognition software and cameras to see what people liked, then building around that. That may be creative; I'm not really sure I count it. We're creating recipes based on chemical properties, which is heavily statistically based. I think that's interesting, but those last three still need a little work.

So, my last slide here: as an industry analyst, whenever somebody says, well, we have a product and it's AGI, this is what I go through. First of all, can I see it? If somebody says we've got it, we just can't show it to you yet, that's a no for me; it doesn't mean they don't have it, but it means I'm pretty dubious. The key ones: it doesn't require human intervention to learn about new domains, and that is a non-starter for me; if it does, then it's not general intelligence. Can it communicate its findings? Can it ask for help or for missing data and knowledge? I'll give you a good example of one that can: when they built the instances of Watson to act as a doctor's assistant, and this is augmented intelligence now, not artificial, the system could know that certain tests exist and that, if it had the results of those tests, it could change its confidence in something.
So the system can say: I would have more confidence, or I would be able to narrow down the options, if I had this information. I think that's really a requirement. And finally, can it learn to learn? It's one thing to have a system that will go out and just look for anomalies or novel patterns, but if it can start to build on top of that, think of it as meta-learning, and get through all of those hurdles, that's something I really want to see.

So, to answer the question from the beginning, when can I get it? I'm going to say: not today. There's a lot of work going on, and given some of the things we've seen with Cyc and OpenCog in particular, and a couple of other folks doing things with distributed intelligence, where it's crowdsourced, I think we're on the verge of seeing some real commercial systems that would meet all these criteria. I look forward to having people actually show me, in the next year or two, that they've broken those barriers. With that, I'm going to hand it back to Anita and see if there are any questions we can answer today. Here's my contact information; I'm happy to follow up with you afterwards if there are questions we don't get to today, or that you don't think of until later. So, thanks.

Thank you, Adrienne. I have a few comments and hopefully a few more questions; please add your questions down in the Q&A section, or even in the chat area, preferably the Q&A. And to answer one of our most common questions: we will be sending a follow-up email within two business days with links to the slides, links to the recording of the session, and anything else requested throughout the webinar. We've had a few people say it was a great presentation and very fascinating, and I also love this comment: now I can recognize and understand; before, I could just recognize. Good.

So here's a question: RPA is in hype; some of the vendors are doing it AI-focused; how do you see that market, or solutions based on AGI? I'm sorry, could you repeat that? Is it RPA, so they're talking about robotic process automation? I think so, it says RPA: RPA is in hype, some of the vendors are doing it AI-focused, how do you see that market? Are there any current effective hybrid approaches integrating humans with AI, even if weak AGI? Any effective, even if weak?

You know, what I should probably try to draw, and I just didn't think of it until I heard this question, is a spectrum. AGI is one of those things where we probably need a maturity model, so we can say that even though AGI by definition these days is strong AI, it's not a simple case of strong versus weak; there are things that are on the way. So if you're looking at that, yes, there are a couple. I hesitate to toss out names of companies that are not really positioned that way, but I'll mention a couple. In a couple of the tutorials last week, I used an example from a company called Loop AI.
Loop AI has an interesting system. One of the things I showed was how it looks at natural language for patterns and relationships, and I gave an example that Loop had shared with me, from Al Jazeera, in Arabic. It's certainly not a case where I even recognized most of the things in the character set, but some of the things pulled out of the text were numbers, and by looking at the numbers I could see that they happened to be aircraft designations. As I was using this in a tutorial last week, I pointed out that this is a kind of natural language understanding, again in air quotes, where the system doesn't even need to know what language it's looking at, because it's not looking at semantics. Somebody in the audience actually recognized all the terms in the diagram and started talking about how things were laid out. So that's a system that can cut across domains and across natural languages, and although I've never heard them claim it in these terms, I think it would be an interesting example. I've mentioned that Watson cuts across domains like that; again, they're not billing it in those terms. Digital Reasoning is doing some things as well. The Loop AI one is the only one that comes to mind that isn't attempting to understand the language at a semantic level; they all have to handle natural language to be used like this. If the person who asked the question wants to follow up, I'd be happy to provide some more thoughts on that.

Great. We have another question here: OpenCyc is an RDF-based system that is queried using, is it, I'm not sure how it's pronounced, SPARQL? Is that part of your model?

Is that part of my model? When I say the model, to me the model is the data and the assumptions, so you could think of that as kind of part of the core, that's the data. SPARQL would be the data management, the data access level. There's nothing about it specifically that is cognitive or AI, but what you're getting access to using it, the data in that graph database, would certainly be part of the model.

Great, and thank you, Adrienne, for this great presentation and Q&A, but I'm afraid that's all we have time for today. But I'm snowed in; send me questions. I'm snowed in, I'm not going anywhere. We'll leave it open for some more questions, and you can answer them for us in the follow-up email. Sounds good.

All right, well, just to remind everyone, we will be posting the recorded webinar and slides to dataversity.net within two business days, and Shannon, our executive editor, will send out a follow-up email with the links, other requested information, and answers to any outstanding questions. Thank you again for attending today's webinar, and I hope you have a great day. Thank you, Adrienne, for another great presentation on smart data. Thanks. Take care. Take care.