Hello and welcome. My name is Shannon Kemp and I'm the Chief Digital Manager of DataVersity. We'd like to thank you for joining the current installment of the monthly DataVersity Smart Data Webinar series with Adrian Bowles. Today Adrian will discuss a pragmatic AI maturity model. Just a couple of points to get us started. Due to the large number of people attending these sessions, you will be muted during the webinar. We'll be collecting questions via the Q&A panel in the bottom right-hand corner of your screen. Or if you'd like to tweet, we encourage you to share highlights or questions on Twitter using the hashtag #smartdata. If you'd like to chat with us and with each other, we certainly encourage you to do so; just click the chat icon in the top right-hand corner for that feature. And as always, we will send a follow-up email within two business days containing links to the slides, the recording of the session, and any additional information requested throughout.

Now, let me introduce our speaker for today, Adrian Bowles. Adrian is an industry analyst and recovering academic, providing research and advisory services for buyers, sellers, and investors in emerging technology markets. His coverage areas include Cognitive Computing, Big Data Analytics, the Internet of Things, and Cloud Computing. Adrian co-authored Cognitive Computing and Big Data Analytics, published by Wiley in 2015, and is currently writing a book on the business and societal impact of these emerging technologies. Adrian earned his BA in Psychology and MS in Computer Science from SUNY Binghamton and his PhD in Computer Science from Northwestern University. And with that, I will give the floor to Adrian to get today's webinar started.

Great. Well, thank you, Shannon, and welcome to everyone who's with us today, wherever you are. I hope you're having good weather; we're finally maybe going to get a thaw today here in beautiful Connecticut. So let's go. I want to talk about AI maturity models. To start, let me give you the agenda. In our time today, I want to quickly go through some definitions and a little context for maturity models in general, and then tell you what I have in mind for building this model for different aspects of artificial intelligence. Then we will look at what I call the five plus one, the technology categories within AI and supporting AI, and look at the maturity of each of those so you can do an assessment. We'll talk a little bit about how to evaluate your own organization or enterprise, and finally look at your applications to see how they fit with the current state of maturity for the different technologies. And we'll wrap up with a couple of recommendations and hopefully some Q&A.

With that in mind, I'm just going to use the term AI today. Historically that has meant artificial intelligence, but I want to point out that we're now seeing it used for automated intelligence or, more commonly, augmented intelligence to supplement natural intelligence. And I use the term amplified intelligence when I'm talking about what we think of as making people more effective; you're not really making them smarter, which is the way it's positioned sometimes. But the point is that for all of these, today we're just going to use the term AI, because that captures all of those different aspects, whether we're automating, whether we're augmenting, whether we're amplifying.
It's technology that allows us to perform some functions that at one point were just the province of human beings. With classic AI, going right back to the very beginning in the 1950s, the key areas were things like problem-solving, natural language processing and understanding, learning, and perception. When we talk about modern AI, we still have all of those categories, but the fundamental changes have been the introduction of deep learning beyond machine learning and the general availability of big data for algorithms of all types, in particular for machine learning that requires a lot of training data. So when I say modern AI, I'm really thinking of the problem-solving abilities of classic AI, plus deep learning and big data.

Now, maturity models. I want to set the context here. If you go back to Abraham Maslow, a psychologist who in 1943 was looking to explain human motivation, he created this idea of a hierarchy of needs. You'll see different terms in there, because over the course of his career there were some refinements. But the basic idea is that to understand human motivation, you have to take a hierarchical view: people need to have their physiological needs satisfied first, and you can't really be thinking about things like self-actualization or esteem or your social position if you can't breathe or you don't have food. Going higher, you get into safety, then social needs, et cetera. The key here is that a lot of the models that have been built since Maslow did this build roughly on his framework, with the idea that maturity starts at one level. There have to be some things that are a starting point, a basis, which then gradually get refined and progressively solve more difficult problems, for example. But according to Maslow and the folks that followed the hierarchy-of-needs type of model, you have to have everything below the level you're working on satisfied, otherwise you have to go back. So if you're working on self-actualization and something happens to threaten your safety, or you have some unmet physiological need, you have to go back and take care of those before you can progress. And some of that thinking is found in most of the technology maturity models that we see.

So back in 1988, the SEI, the Software Engineering Institute out of Carnegie Mellon, sponsored by some federal funds, brought together a number of folks to build a model of the software development process, to understand to some extent why some projects failed, but also to help people improve their processes, with the idea that a more rigorous, more formal process, as long as the criteria for defining rigor were reasonable, would give you better results as you moved up the levels. In the original CMM, the Capability Maturity Model out of the SEI, there were five levels. It went from initial, which was relative chaos (you have people, you have a project, but there may not be any sort of formal processes in place), through repeatable (you've perhaps learned from some of your previous projects, but it's still fairly loose), through defined and managed, and then the ultimate, the self-actualization, if you will, of software development processes: optimizing. It was interesting back in the late 80s and early 90s; I was very active in defining software processes, working for methodology companies. When the criteria were first introduced to the population, just about everybody was at the initial or repeatable level.
It was somewhat discouraging to have a schedule, if you will, and go in and do the assessment. And by the way, the CMM was always intended to be a self-assessment, so you've got a book of questions to ask. And most people were coming out down at the bottom. Over the course of the years, people used it kind of like the way manufacturing started to adopt Six Sigma. The CMM guidelines were used and people started to improve. Early on we saw largely outsourcing companies or large professional services companies obsessively focused, and I mean that in a good way, on building their processes so that they could assure customers they were following a CMM level four or five process. And over the years, SEI and others have expanded the CMM so that there were different maturity models for different aspects. Now the term is thrown around pretty loosely.

So what I want to do is move you up from 1988 all the way to 1993. We're all very lucky that this is a rough copy of an article that I wrote in 1993, with an artist's conception of what I looked like, so you can actually see the mustache and the dark hair. But the point I want to get across here is that this is something I've been working on for a very long time: the issue of processes and how to direct your focus. Back in 1993 I published a paper, when I was doing the methodology column for Object Magazine, saying that we needed to go beyond process measurement and, if we want to predict success or improve our odds of success, add another couple of dimensions. So the three-axis model that I have here, figure one, which I've just expanded a little bit, goes from the CMM view of initial through optimizing and has an analogous set of dimensions for people and for products.

My thought back then was that for the process to be optimizing, you had to have people performing certain tasks in a reliable, valid way. I did a lot of work with the defense community, so we talked a lot about things that were valid and verifiable. And so we had a level here going from initial (somebody was just hired) through trained, educated, mastery of certain skills (this is all skills-oriented), and then a certification process. It was after this that some of the certifications you may be familiar with, particularly in security and in auditing areas like that, became pretty popular. So now we see organizations that are looking for those certifications as credentials. And the third dimension that I had here was looking at the products. To put this in context, in the early 90s when I was doing this work, a lot of the products that were out there to help people develop object-oriented software were fairly rudimentary, and I was looking for a way to quantify what made one better than another. So I had the same idea of something that would be certified, the way we certified processes and people, and in particular I had the defense market in mind.

Well, fast forward a little bit, and I'm still expanding on that. Process is still important, and people are always going to be important. But when we're looking at an emerging area like AI, and it has been emerging since the 50s, I'm going to suggest that we look at two dimensions: the technology itself and how refined it is (I'll give you some thoughts on how to measure that), and then the application that you're trying to build with it.
And so we're going to have some measurement of the alignment between application requirements and the technology that's available. And that's really what I'm talking about when I say pragmatic. This is not something that's going to get a dissertation award. It's not going to get a Nobel Prize. But if it helps you to identify the right application and the right set of technologies to build that application, then it will have served its purpose.

To look at the technologies and the applications, I've got a simplified diagram here, and we're going to go into a little more detail in a minute. But for the technologies, when I say that something has to be refined, it's really important to me that I don't actually label the axis here the way the SEI did with CMM. I'm not going from one to five, where once you've reached five you're optimizing and life is great. We're dealing with technologies that are constantly evolving, and there are going to be some disruptive steps. So it doesn't make sense to me to have that five-level model. I'm looking for things that we can measure. We can ask the right questions about whether the technologies themselves are robust, reliable, and at some point certifiable to show their fitness for purpose. And with that in mind, we're going to look at it for the different AI technologies. Rather than having scaled tick marks along the side, I've just got a crossbar there for what I call the utility threshold. The idea is that there are a lot of technologies out there, and then there are a lot of vendors for each of the technologies; there's usually more than one vendor. And what we want to do is establish at what point the technology itself, as instantiated by a vendor in products, has reached a utility threshold. When is it safe to use? When is it going to return more value than it consumes?

And that only makes sense to me if, at the same time, we're not just creating applications for the sake of demonstrating that a technology works. It has to fit with a business need, and that's what the application side is all about. So on the application side, we want to go from something that's initial, where we just know we want to build something in a particular space (I want to build an accounting system), through the process, the discipline, if you will (it's not a science), of creating a set of requirements and then going in and designing the system. We have to make sure that as we look at the different requirements, as we document and quantify those, they can actually be produced. We can obviously write a requirement that can't be fulfilled today. But we want to be able to map the requirements to the available technologies, up to and including anything that's above the utility threshold as an AI technology. And so for the application, we're going to ask questions about the data that it requires. We're going to ask about determinism and uncertainty. Is this something where the system has to produce one answer, has to be verifiable, and you have to be able to show the evidence to support it? Or are we dealing in a domain, say medical diagnosis, where there may be multiple explanations? They may all be true; they may have different contributions, different disorders contributing to a set of symptoms. So for the requirements, we have to look at the level of certainty that's required, and then determine, based on the level of certainty and the data that's available:
is there a technology or combination of technologies that meets that utility threshold? You can't look at one without the other if you want a reasonable chance of success. And the last one on the application side is explainability. We are certainly at the stage today where we can build systems that meet the performance criteria, but the systems themselves are so complex that we have difficulty explaining, if we can even begin to explain, how things are being solved. So if that's important, and in certain domains it will be, perhaps legal or medicine, then the utility threshold actually has to be raised on the left side.

So the model for identifying what I'm calling mature enough, which is what this utility threshold is, we're going to look at today according to a breakout of several different AI technologies. It's hard to tell from this, but it's actually five plus one. There are the core cognitive technologies, which give you understanding, learning, and reasoning. Then there's human-computer input for the cognitive system, human-computer output, and then the one on the right, machine-to-machine I/O; those are actually separate, I'm just grouping them together. Those are all the AI technologies that we want to evaluate for maturity. And then all the way on the left is the foundation. These are technologies that aren't, strictly speaking, AI. They're not simulating or modeling any behavior or any processing that we would ascribe to human cognition, but they're required in order to support it. So we want to look at where each of those is today.

The goal in building this model is to understand and find opportunities to build enterprise applications that leverage, that build on, AI technologies that are, as I call it, mature enough. They don't have to be mature. We're not going to have a 100-point scale and say we need 75 as a passing grade in order to have it move over. These are going to be sort of sliding scales. But what we're looking at is mapping the application requirements to the technologies and finding things that we can leverage today.

To do that, I'm going to give you this stack, and some of you who have been with us on other webinars have seen different variants of this. Basically, this is how I look at the world of modern AI. The core cognitive functions are in a different color because that's where I spend most of my time: the whole area of learning, understanding, and reasoning. If you look at the market today for what's commonly called cognitive computing, that's really the basis for cognitive computing. The foundation technologies are data management, analytics, and cloud. Again, these are not AI; even if you get into predictive analytics, it's not really AI, but you need them in order to do experience-based learning or abstraction and understanding. There should be a symbol on the keyboard for air quotes there, because when I say understanding in an AI context, it's really artificial understanding. I'm not claiming that the system understands in the way that a human would. That's a hot debate, and I take the position, from my behavioral psychology training, that if we can have predictable behavior that looks like understanding, that's close enough. Above that, we get into the human-computer interface, because if we're going to do things, we have to actually communicate with people, and then into the slightly more futuristic things in terms of augmented and virtual reality.
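To make that mapping idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the category scores, the threshold value, and the requirements list are stand-ins for whatever your own assessment produces, since the model deliberately doesn't prescribe fixed numbers.

    # Hypothetical sketch: flag required technology categories that fall
    # below the utility threshold. All numbers here are placeholders.
    UTILITY_THRESHOLD = 0.6  # context-dependent; not prescribed by the model

    maturity = {
        "foundation": 0.90,      # data management, analytics, cloud
        "cognitive_core": 0.75,  # understanding, reasoning, learning
        "hci_input": 0.62,
        "hci_output": 0.58,
        "machine_io": 0.85,
    }

    # Categories a hypothetical application actually depends on.
    required = ["foundation", "cognitive_core", "hci_output"]

    gaps = [c for c in required if maturity[c] < UTILITY_THRESHOLD]
    print("Below threshold:", gaps if gaps else "none; mature enough")

The point of the exercise is the shape of the check, not the numbers: you score each technology category, you score it again per application, and anything a required category lacks becomes visible before you fund the project.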
The oval on the right in gray just shows that there is a hardware component to most of these things. Today we're really going to focus on software, but if anybody's interested in the hardware side, I'd be happy to take that up offline or maybe in a future webinar.

This is a diagram that I use to explain the whole landscape, and you can map the previous diagram to this one. If we take the circles in the middle, we start with a model which represents all the data and all of our assumptions, and then we build this cognitive engine, if you will, that does the understanding, reasoning, and learning. And in order to get data in from the left side, coming from humans or machines, or to produce data or knowledge on the output going to humans or machines, we have to have this data management layer. So if you think about it, the circles in the center represent the cognitive baseline, but they're supported by data management. We're going to break this up into five categories: the cognitive core, the human input, the human output, the machine input, the machine output, and then the plus one, if you will, is the data management or foundation technology. So we want to look at each of those to see where they are in terms of maturity or readiness.

Starting at the bottom, if you will, is level zero. This is data management, and this is pretty straightforward. The data management that's required for a modern AI-based or cognitive-based system is really not something that is peculiar to AI. Anything that's in here can be used for more traditional transactional processing. And so as you might expect, this part of the infrastructure is pretty well defined, and even though it changes rapidly, it meets all of our criteria for performance as a ready-enough system. I've just given a few examples: things like UIMA for handling unstructured information management; Hadoop (most people will be familiar with Hadoop and the Hadoop file system); Spark; RDF, the Resource Description Framework; and OWL, an ontology language. So we have all of these things, and what you build with them doesn't have to have AI in it, but the amount of work that's gone into them speaks to their maturity, if you will. And of course, there are also a lot of commercial systems that fit in there, commercial database systems, whether they're relational, hierarchical, or graph databases, whatever you need. So here's my first maturity model: all of that is pretty well understood. This part of the diagram goes from red to green, and we're well into the green for the foundation technologies. The questions you ask to see if something meets that criterion are all based on deployment history and examples of successful implementations, and if people are interested, I can share some of the more detailed questions that we use to do this. Basically, again, I like to show that there is some headroom; these things are going to continue to get better, and there's going to be more investment. But we are already at the point where the foundation technologies are absolutely solid, and there's no concern for most types of complex data that you're going to have to process, whether the complexity is really high velocity, or whether you're looking for an infrastructure that allows you to take historical data and mix it with real-time data.
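As a tiny, hypothetical illustration of the RDF and ontology pieces of that foundation layer, here is a sketch using the open-source Python rdflib package; the resource names are invented for the example.

    # Hypothetical example: facts stored as RDF subject-predicate-object
    # triples. Requires rdflib (pip install rdflib); names are made up.
    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/")
    g = Graph()

    g.add((EX.Boeing737, RDF.type, EX.Aircraft))
    g.add((EX.Boeing747, RDF.type, EX.Aircraft))
    g.add((EX.Boeing737, EX.seats, Literal(189)))

    # Retrieve every resource typed as an Aircraft.
    for subject in g.subjects(RDF.type, EX.Aircraft):
        print(subject)

Nothing in that snippet is AI; it's exactly the kind of well-understood plumbing the foundation level provides, on top of which a reasoner or learner can operate.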
I was talking to some folks doing enterprise analytics at a grocery chain that has about 3,000 stores nationwide. They have analytics that look at the heat signatures of people walking through the store, so they can see that there are five people walking together, and based on that, start to analyze historical data: what's happened in the past when we've had this pattern? Are they actually a family, or five people who are randomly together? And then they predict, based on that, when they're going to make it to the checkout, and assign personnel. All of that is really well established. So the foundation stuff is solid today.

The next point, and this is a key one: AI maturity isn't evenly distributed. So I don't have one number that says this is your AI maturity. I think that would be overly simplistic and pretty useless. What we're looking at here is the maturity for each of the five categories, starting with the cognitive core: understanding, reasoning, and learning. Even there, there's a difference; some of these are more developed than others. But what I want to look at is the maturity for that whole area.

This is for understanding, and you may have seen this diagram when I do a natural language understanding talk. This is completely computational linguistics, if you will; it's all done by mathematics. The program that's looking at the text doesn't try to ascribe meaning to the individual tokens the way you would in something like a compiler; it's looking for patterns. And so in this case, it happened to take as input some articles from Al Jazeera, and it found relationships which, when presented to a human, look like understanding, if you will. In this case, without knowing the language or the symbology, if you just start to look at the numbers, you might be able to determine that the article or set of articles that was analyzed is talking about aircraft: 737, 747, et cetera. And that, again, is where I get into the air quotes. Knowing that we have something like this, and knowing that we also have systems out there that are very good today at analyzing text and identifying parts of speech, relationships, and semantics to get at some of the intent, our current state today for cognitive understanding is certainly above the threshold for utility.

For reasoning, the second of the three (understanding, reasoning, and learning), a lot of it comes from formal logic. Going back longer than I'd like to think, we've been studying different approaches to quantifiable formal reasoning software. Symbolic logic versus some of the other representations: this is well understood. You can get access to algorithms today that are mathematically provable in terms of their ability to solve different types of reasoning problems. I just have an example here from Microsoft Azure, but the tech titans, as we call them, basically all have symbolic logic reasoning models that form the basis of reasoners within a cognitive system. So that's pretty mature today as well.

And then the last one is, of course, machine learning. The way the popular press presents it today, it looks like a hockey stick graph: we were chugging along, chugging along, and all of a sudden a few years ago the world exploded, deep learning is suddenly successful, and now that's mature. The reality is that we need to look at the different types of learning, with a focus on supervised versus unsupervised.
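To make that distinction concrete, here is a minimal sketch, assuming scikit-learn and toy data invented for illustration: the supervised model needs labels, the unsupervised one does not.

    # Hypothetical toy contrast of supervised vs. unsupervised learning.
    # Requires scikit-learn (pip install scikit-learn).
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    X = [[0.0, 0.1], [0.2, 0.0], [0.9, 1.0], [1.0, 0.8]]

    # Supervised: needs labeled (tagged) training data.
    y = [0, 0, 1, 1]
    clf = LogisticRegression().fit(X, y)
    print(clf.predict([[0.1, 0.1]]))  # predicts class 0

    # Unsupervised: discovers structure with no labels at all.
    km = KMeans(n_clusters=2, n_init=10).fit(X)
    print(km.labels_)  # two discovered groups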
Supervised is the approach that requires more data, more training data, tagged data, if you will, than unsupervised. Each of these has progressed enough in the last five to ten years, I would say, that each of the approaches, whether it's general or deep, supervised or unsupervised, or reinforcement within supervised, is either at or above that threshold. And so when I look at the core cognitive functions today, again reasoning, understanding, and learning, I think that in total they pass the utility threshold, though some are advancing more rapidly than others. What's happening in reasoning is not advancing as fast as what's happening in learning because, frankly, we've been at it longer, I would say. The standard approaches for reasoning had their rapid acceleration earlier than what we're seeing with deep learning; it's not that they won't go any further. And now, of course, there are alternative approaches out there. But the point, and we'll pull it all together in a few minutes, is that we're already at the point where you can find one or more core cognitive functions that will perform at an industrial-strength level, which is the criterion we're looking for.

So next, we'll take a look at the human input. The problem here, and this is shown as if it's outside cognition though the reality is that these are fuzzy boundaries, if you will, is perceptive input. And by perceptive input, this is where I like to talk about the idea that natural language is much more than the spoken or written word. To understand communications, which is what natural language is all about, you certainly need to understand what people are saying. But what people say isn't always what they mean. And so we need to incorporate technologies to understand gestures and expressions; language, with text and voice, is what we first think of, but then we start to get into human input that includes things like vision and visual analytics. What we're trying to do in this layer is get from the intent that someone has to a way of representing, identifying, codifying, and storing, if you will, the concepts, the meaning, and the emotions, so that you have a better understanding of what people are really trying to express. My favorite personality on this is Michael Hayden, who retired after being the head of the CIA and the NSA, who said: you're not just responsible for what you say, you're responsible for what people hear. And that's what we're trying to do when we talk about advanced HCI, or advanced human-computer interfaces: have systems that can identify more than what is said, and identify meaning from visual cues, from auditory cues, and from subtleties in language that you're not going to find in a direct translation. So it's hard. And where this fits, when we look at all of this with perception: natural communication, I would say, certainly includes the words and the affect, but also things like touch. There's a different meaning when you speak to someone and you put your hand on their shoulder, for example. And so we want to look at the state of the art today in all of these different types of perception that complement the more straightforward (I don't want to say simple) problem of identifying syntax and semantics.
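For a sense of what that more straightforward layer looks like in practice, here is a small sketch assuming the open-source spaCy library and its small English model; the sentence is invented for the example.

    # Hypothetical example: surface syntax (parts of speech, dependencies).
    # Requires spaCy and a model:
    #   pip install spacy && python -m spacy download en_core_web_sm
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Please reroute the delivery to the Hartford store.")

    for token in doc:
        # Part-of-speech tag and grammatical dependency for each token.
        print(token.text, token.pos_, token.dep_)
    # Note: none of this captures inflection, gesture, or affect, which
    # are exactly the cues the input side still has to mature into.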
Something can be absolutely perfectly reasonable as a sentence, but the inflection will change the meaning, the facial expression will change the meaning, and we want to look at how that fits with maturity. With natural language understanding, we start out with voice and text, go through syntax and semantics, and then you have to model it. What I'm suggesting here is that this part is all really, really well understood today in terms of being able to capture it. It doesn't mean, and when I talk about understanding I always have to come back to this, that we're doing it the same way a human does. We may be capturing different information. But if we treat the system as a black box, the way we would treat a person as a black box in an objective experiment, we can come pretty close to mirroring their behavior. So the thing to remember here is that when the system on the right is a cognitive system rather than a human being, you're trying to go from images, language, sound, gestures, et cetera, to understand, codify, and create a representation of the intent and the meaning. What was that person trying to communicate? And that's what the modern human-computer interface is trying to do.

So I'm going to say, and I'll show you all of these together later, but you probably noticed from this that HCI input is just above what we call the utility threshold. And the reason I have it there is that we certainly have very good understanding, maturity, and performance levels with things that are just looking at text, whether you're dealing with Siri or Alexa or some of the systems that are becoming embedded in almost every appliance and application. Those things do meet the minimum utility threshold. What's out there right now that is trying to identify affect and go beyond it to understand the meaning and intent, that is maturing, but it is not yet mature. So we could split this HCI input into three or four categories too; this is sort of the weighted average.

Human output is the third part. And here what I'm looking at is: I have this model, I have the knowledge representation, I have my corpus, I have all the data that I started with, plus all the data and the knowledge that I built up based on interaction with humans and machines. When the system has to produce some output in a form that's suitable for human consumption, how ready are we to do that, rather than just cranking out a standard report? How can we customize this? I had a great conversation recently with someone who is concerned with anonymity, and so they want to avoid keeping track of information about the person that's putting data into the system. But the problem with that is that if you don't know anything about the context of the person, the same input from two different people can have completely different meanings. And if you're using that in your model as the basis for generating output, then you're going to get your output wrong. So what we want to look at is: okay, let's assume that we've got the input right and the knowledge in there right. How do I create human-centric, if you will, or human-oriented output? We have natural language generation technology, which is getting better all the time. There are a couple of companies that are doing really interesting work in that; I'll give a shout-out to the folks that came out of my old department at Northwestern, at Narrative Science.
They're doing a really good job looking at how to construct or generate narratives from data, with some contextual information from the model. A lot of the things out there tend to be more chatbot-like, where there's stimulus and response, but it's not really tailored, if you will, to the audience. So that's one part of it. The other part deals with emotive output. We can start to go beyond just producing narratives and text to using what we call emotive text-to-speech, where the emotion shows in the speech, if you will, and using avatars. I'll have one example here, out of New Zealand, where some folks are doing really exciting work demonstrating, through facial expressions on an avatar driven by a neurobiological model, an emotional response that fits the context and the intent. This stuff is all really interesting. I think it is the future, and I don't think it's the distant future; I think we're going to see more and more of it. But this part of the advanced human-computer interface is not as mature as what we have on the input side today. So with that in mind, I'm ranking the mature-enough level for HCI output slightly lower than for HCI input. Again, you could split that into subcategories, it just gets too cumbersome. Natural language generation is a little higher than emotive generation. But for almost any problem set that requires natural, personalized output, we are now above that threshold.

The last two here, four and five, I'm going to look at together, because in the interest of time I don't want to try to divide them out: machine to AI machine, or AI machine to another machine. When we've got that sort of communication, in general we're dealing strictly with highly structured or surface-structured data, because you're communicating between machines. And those things are pretty well above the threshold. So that doesn't really depend so much on the actual AI inside; it's similar in nature, if you will, to what we see with the infrastructure or the underlying foundation technologies.

Okay. So putting them all together, where are we today? We're in a situation where clearly the foundation technologies are very useful; they're useful with or without AI. The machine-to-machine communication, the I/O, is just about up there. The core cognitive capabilities in the middle, which again are understanding, reasoning, and learning, are, I would say, mature enough for almost any application area. The reason it's not higher, perhaps, is that one of the things we need to look at is whether or not these systems can explain their results, and in many cases, the more complex they get, the more difficult it is to do that. But for almost any application, and we're going to look at applications in a minute, there is something out there in terms of this core that will produce reasonable results at enterprise scale. Now for the HCI input and output: as I just said, the input is a little more evolved, if you will. We have perhaps more research, certainly more vendors, working on understanding language and gestures and trying to detect affect on the input side. That is further along, and I'm happy to talk to folks if you're interested in that. But I still show it as barely above the threshold, and slightly below that is what we have for advanced output.
Again, the reason that both of these are so low compared to the other technologies is that I'm using perhaps a more rigorous scale, because I'm looking at trying to identify concepts and intent rather than just what is being said. I'm trying to understand, on the input side, what is meant, not just what is said. What is said is a lot easier; we can do translation, to some extent, without knowing what is meant. But I'm holding these to a higher standard. So I think that's where we need to see more of the investment going forward.

All right, I'm going to quickly go through the next section. The number one frequently asked question I get when I talk to people who are interested but not yet invested is: is AI ready for prime time? You can probably guess from the previous slide, where everything was at least barely above the utility threshold, that my number one answer is yes, it's ready. But that's not the whole story. Having technology that's ready and having technology that's right for you is sort of the difference between real time and right time. Just because we have the technology doesn't mean it's a fit for the problem that you're trying to solve. So the second part of my answer is: yeah, AI is probably ready for you, but are you ready for AI? And I break that down into those other two dimensions, the people and the application. In the interest of time today, I'm not going to talk about people and skills; if anyone's interested in that as a topic, I'm happy to have a separate thread on it, or we could do a webinar on it at some point. I want to look, really in the last few minutes, at how to identify an application that fits with the types of technologies that are out there today and can benefit from those five AI categories. And a big part of that is understanding how the requirements align, or map, to the technologies. Within that, a big part is understanding the appropriate types of data that make an AI solution the right solution.

So these are the questions I use when I'm working with a client and they're trying to decide whether or not to fund an application and put it in their portfolio. This could be a new application or it could actually be an enhancement to an existing one. But this is sort of a subset of the list of questions that I use to identify the best fit for today's technology. In the first set: how important is it for the system to provide the right answer, rather than multiple alternatives ranked by confidence or probability? And if it's really important that you have one answer rather than multiple answers, or that the system engage in a dialogue of question and answer where it knows when it doesn't have enough data or evidence, then you may be better off with something that's not an AI system, something with a more restricted domain, perhaps a rule-based system that can reject questions it can't answer. How important is it for the application to be able to explain its results? Back in the late 80s I did a lot of work with expert systems, rule-based systems, knowledge-based systems, and a lot of those techniques are still embedded in applications today. Those are great because they are deterministic. At any given point, you get a question, and you can refine, restrict if you will, the range of answers that might come next. These are the types of systems you want if you're dealing with something like the tax code.
There may be things that are legal and things that aren't legal, and we'll just restrict the domain by asking that set of questions. If you're playing games, there are legal moves and there are moves that are not legal. That doesn't require AI. Where you want to get into AI is when you're dealing with uncertainty, when you don't have the ability to look at all the options and so you want to rank multiple different options. The last one here: how important is it for the application to improve performance based just on experience with the data? So you don't want to have to go in, or you're not able to go in for whatever reason, and update the rules; you want it to identify rules based on the behavior. All three of these questions will guide you either towards modern machine learning, reasoning, and understanding systems, or away from them into more conventional types of systems. And as you might expect, for each of these, and I'm thinking offhand of a large client engagement where they were looking at a lot of applications, a couple of thousand requests for applications, this was part of a checklist that helped say, here's where we define our own threshold. I'm not providing that threshold here; it's context-dependent, if you will, and you may find that in some cases you're going to override it if you're the person asking to build the application. You know, it's 50% important; all of these things are subject to some interpretation. But these are the three key questions that I like to ask as a starting point.

The next section: how important is it for the users to be able to interact in conversational natural language? And conversational is the important word. This is where we get into things like chatbots, for example. In some cases it's going to be sufficient just to have a stimulus-response system drawing from a pre-programmed set of answers. Maybe you're a telecom company and you want to be able to answer help requests, and you only have four possible answers: we'll send a truck, we'll stop your billing, whatever the options are. But if you want free-form interaction, to be able to expand the range and be able to interpret it, that changes the type of system that you can build; we're going to go beyond a chatbot. Then, and there are always additional questions for this type of assessment: how important is it for us to be able to determine, when Mrs. Jones gets on the phone with us and she's irate, her current state of mind? Whether this is a system where she's dialing in, or a system where we're interacting with a customer at a kiosk in our physical store, how do we handle this on input and output? You can detect emotion, tone, et cetera, from the choice of words, from the inflection, from analysis of the audio signal. You can do it with video analytics, looking at facial gestures and at physical gestures with hands and arms, et cetera. And how important is it, once you've understood that, that you have a range of responses, which may not depend just on the current state but may have to bring in historical data? We need to be able to look at all of that to decide, because even though each piece of that technology may be mature enough, maybe the combination is not going to allow you to do the output you want.
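Pulling those screening questions together, here is a minimal sketch of how such a checklist might be tallied. The question keys paraphrase the talk; the answer values, and the direction each question pushes, are hypothetical, since the talk deliberately leaves thresholds context-dependent.

    # Hypothetical checklist tally; all weights and answers are placeholders.
    answers = {  # 0.0 = not important, 1.0 = critical
        "single_verifiable_answer": 0.9,
        "explanation_required": 0.8,
        "improve_from_experience": 0.3,
        "conversational_interface": 0.5,
        "affect_detection": 0.2,
    }

    # Needing one verifiable, explainable answer argues for a more
    # conventional (e.g., rule-based) system; the rest argue for AI.
    conventional = (answers["single_verifiable_answer"]
                    + answers["explanation_required"])
    ai = (answers["improve_from_experience"]
          + answers["conversational_interface"]
          + answers["affect_detection"])

    print("Leans AI" if ai > conventional else "Leans conventional")

In a real engagement you would weight the questions per context and override where it makes sense; the sketch only shows the mechanics of turning the interview into a comparable score.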
In some cases, you're better off not trying to customize the output for the input if you're likely to get it wrong, and that's a whole separate topic that we should discuss at some point. And finally, this is really important: as I said, modern AI really has to take into account the fact that we generally have access to large datasets. How much of the data that you're looking at is structured data? And by structured data, I'm really referring to what I call surface structure. It's stuff that could be coming in through sensors, from the IoT, or from other systems. Is it internal? Does it need to be brought in from the outside and then processed? As far as that level zero, the infrastructure, is concerned, all of these are things that we can handle. Depending on the volume and the different sources, though, we may not be able to handle them if it's important to understand tone and intent. And so these are the kinds of questions that we're going to go after.

Okay, getting ready to close it out here. So the bottom line, if you will, for all of this is that the process dimension is still important. I don't want anybody to think that we're abandoning things like the CMM, the SEI Capability Maturity Model. But when you're making a decision on these technologies, it needs to be augmented with an understanding of the technologies and the sub-technologies. As I've shown, there are five broad categories within AI that need to be addressed, and each of those has at least two or three subcategories. And to make a decision about investment, you need to map the available technology to a quantifiable representation of the requirements. If you've made it this far, I'm going to give you the final slide, which is an eye chart. There it is all together. We have these five-plus-one categories, and all of them have technologies that are above that utility threshold. None of them are so far out there that all risk has been eliminated, if you will. But this is the way, when I work with clients, I help them understand it and prioritize. And with that as a backdrop, I'm going to hand it back to Shannon and see if we have any questions. I'll just mention that we've got some upcoming webinars, and Shannon mentioned that I'm working on a book; I'm happy to talk to anybody about that offline too. It's looking at the age of reasoning: how all of these technologies come together to provide business value. Shannon?

Adrian, thank you so much for another fantastic presentation. Just a reminder to the attendees: I will be sending a follow-up email by end of day Monday with links to the slides, the recording, and anything else requested. If you have questions, submit them in the Q&A in the bottom right-hand corner of your screen. I do see a question here in the chat for you, Adrian: can you provide a good resource to give guidance for enabling and supporting AI from the perspective of a data architect or data modeler?

Sorry, I just lost my voice. A resource? I got the data modeler part. What was the resource for? The resource to give guidance for enabling and supporting AI. Oh, there's so much out there, and so much of it is junk. Let me just write something up and I'll send that out when you send the slides out. I don't have one single resource, and I don't want to be plugging a book here. Sure. But there are some good things out there that I'll pull together. We love resources.
Anything else? Everyone's kind of quiet today. Everybody's got the flu; it is going around. All right. Well, again, I'll send a follow-up email by end of day Monday. Adrian, thank you so much for another great webinar. Thank you, I really appreciate it, and we will chat next month. All right. Take care, everyone. Bye.