I'm Shannon Kemp, and I'm the Executive Editor for DATAVERSITY. We'd like to thank you for joining the first DATAVERSITY webinar on cognitive computing. Today we have a panel discussing Understanding the New World of Cognitive Computing, moderated by Steve Ardire. Just a couple of housekeeping points before we get started: due to the large number of people attending these sessions, you will be muted during the webinar. For questions, we will be collecting them via the Q&A section in the bottom right-hand corner of your screen. Or, if you'd like to tweet, we encourage you to share highlights or questions via Twitter using the hashtag DATAVERSITY. As always, we will send a follow-up email within two business days containing links to the recording of this session and the slides, plus any additional information requested throughout the webinar. With that, I will turn the webinar over to Steve to get us started.

Hello and welcome. As Shannon said, we have one hour to go through a slide deck with a panel I'm excited about, which I'll introduce shortly, or rather, whose members will introduce themselves. Essentially, what we're going to be doing is covering, in a fluid manner, the highlights of cognitive computing: the analytics side, machine learning, a touch of deep learning, all of that subject matter. And as you heard, DATAVERSITY will be releasing a white paper that was keyed off of a survey. When will it be published, Shannon?

Attendees will receive a copy of the research paper in the follow-up email after the webinar.

Great. And so, without further ado, let's plunge right in. My name is Steve Ardire. I'm an independent practitioner; I advise early-stage software startups, and cognitive computing, AI, and machine learning are my sweet spot. I've been doing it for 20-plus years. Today we have the following panel, and I'm going to ask them to introduce themselves. Tony, we'll start with you, please.

My name is Tony Sarris.
I'm an independent consultant with a boutique semantic technology consulting firm that I call Intusemantics. I also work as a technology evangelist for a company called Primal that develops semantic technologies. My background is originally in database technologies, and I got into enterprise modeling as a result of that. Then, during the last wave of semantic technologies in the late 1980s and early 90s, I got really involved in ontology and conceptual schema modeling. I went to work at Unisys in metadata and repository technologies, and since then I have very much been pursuing this trend toward knowledge representation and taking advantage of knowledge for personal assistants and software agents. That sort of thing is my primary area of interest.

Great. Jim, please.

Yeah, I'm Jim Kobielus. I'm with IBM, a company that's a little over a century old. I myself am considerably younger, but getting up there in years. I'm an industry veteran, as it says on the slide; I've been in IT for about 28 years now, and with IBM for just over two and a half years. I was an industry analyst before joining IBM, covering big data, and I'm IBM's, as it says, big data evangelist. That means I speak on the power of big data analytics in business and in life in general, and I represent our huge brain trust in all things big data. I play a role in product marketing of big data analytics solutions for IBM. And just as important, I am the editor-in-chief, a fairly new appointment, of IBM Data Magazine, our thought leadership forum for everything to do with cognitive computing, big data, and analytics. It's our magazine and, now, a very social forum that brings together the best thinking from inside IBM, but also from our partners and from industry influencers who would like to contribute and be published to our audience. So I'm happy to be on this webinar today.

Thank you. Adrian, please.
I'm sorry, I'm just laughing because that's quite a list of titles for Jim these days. I remember when you just had to be an industry analyst. I'm Adrian Bowles. I run a market insights firm called STORM Insights, and cognitive computing is one of the areas that we're covering. I describe myself sometimes as a recovering academic: I used to be a serious computer science professor, and AI was one of the areas that I worked in. I've taught at a number of schools, but for the last 15 or 20 years I've primarily been an analyst and advisor to tech vendors and tech buyers. So I'm very excited to be participating in this and working with Steve. And I should get in a plug. I don't have a magazine to plug, but with my friends and colleagues Judith Hurwitz and Marcia Kaufman, I'll have a book out on cognitive computing with John Wiley in January.

Terrific, terrific. So we talked about the survey and the white paper that Shannon mentioned, which is going to be published and which you'll get a copy of. But before we get into the flow of the presentation, let me share some of the highlights from the survey. A couple of things really stand out, starting with the business perspective: a lot of the materials out there, analyst reports and even articles, lean toward the technical side, and over half the respondents felt they really needed more clarity on the business aspects, on how you implement these types of systems, because of a lack of understanding of the business case. Another thing that stood out was this notion of, well, we're already knee-deep in a lot of NoSQL big data projects, and now we're going to add cognitive computing on top of that; that's part of the impediment to getting some of these projects kicked off. With that said, this slide shows organizations' plans with respect to implementation.
So you see, and it's a little hard to read here, that roughly 50% still don't know how applicable it is, because they don't have a clear understanding of the business case alongside the technical case. But it's encouraging that in the last two metrics there, about 15% are tracking developments and beginning some type of roadmap, and between 10% and 15% actually have active plans. So, back to what cognitive computing can do to improve current methods: skill sets, that combination of data scientists and machine learning experts, came up loud and clear, with almost 50% stating, hey, we have our hands full digging into big data and NoSQL, we haven't really constructed the cognitive computing business case, and a lot of us are already pegged with too much to do in too little time.

So there are some interesting findings here. On to the definition: it's always good to have a definition, as something to frame the discussion and get people's reactions. Interestingly enough, the definition tested very well, a good healthy 70% agreement, but you'll see when you get the white paper that there are a lot of comments reflecting people's own opinions. So I'm going to turn it over to the panel for the first time: Jim, Adrian, Tony. We're going to get deeper into this, but at first blush, what would you add to this, or complement, or what have you?

I can start; Jim Kobielus here. I think that's a good start for a definition of how we conceptualize cognitive computing, or of what's brand new here. I often characterize it as AI for the 21st century. What do I mean exactly?
Well, unstructured data is at the core of the applications of cognitive computing, and you've got that here; really it's multi-structured, so this is correct. Machine learning is also very much central: machine learning models, especially unsupervised but also supervised learning too, that learn from fresh feeds of multi-structured data. Very important, and you've got that in here. Then natural language processing: so much of unstructured data is human conversation, human verbiage. The sense needs to be pulled out of that. It's not obvious, and a machine can't pull it out without the aid of special algorithms, machine learning models. So you've got a lot of the important things right in here, and you've tied the business value in as well, in terms of better outcomes. It's really all about using the power of cognitive computing to find non-obvious insights within massive data sets that are growing all the time; non-obvious insights that human beings, just using their eyeballs and their brains, unassisted, wouldn't be able to derive. So in many ways it's an extension of the human cognitive apparatus, our gray matter, into the cloud, leveraging all the advanced analytics that we can, quote-unquote, throw at it. So I think this is a good start.

Great. Adrian, do you have anything else to add?

Yes. What struck me about this is that it's one of those cases of the elephant in the dark room, right? People are going to experience cognitive computing in different ways, coming at it from different perspectives, and I think you've included in this definition those different ways that you can come to it. In a lot of cases today, people see the language processing; they've experienced Siri or Google Now or one of the other tools. Maybe they work with machine learning; they're big data scientists.
They experience it that way, but I think it's really the combination of all those things that is ultimately going to get us where we need to be. And my personal bent is: what do you do with it? You use it for decision-making, to automate some aspects of what would otherwise be a human cognitive process; to make decisions in the role of a virtual personal assistant or an automated assistant, and for some business goal: making your job and your talent more efficient, or providing you more discovery into content that you wouldn't have gotten otherwise, recommendations; really acting as a helper to a human for better business outcomes.

Right. Good.

I've had a number of conversations about this, and I actually feel like there should have been one more possible answer, which is: this is pretty good, but I would take some out and I would add more. Just one more thing. My problem with the way the market is evolving right now is that everybody looks at it from their own perspective and thinks that what they're looking at is cognitive computing. So I treat anything that has a learning-from-experience element to it as being kind of baseline cognitive computing. And while natural language is important, and it's certainly part of the scheme of things for cognitive computing, I think there's some stuff going on right now that doesn't use natural language: things in neuromorphic architectures, for example, or what we saw recently with Microsoft's Project Adam and Google Brain, where you're dealing with a process that's learning with no natural language involved. So I think it's a good framework to work from. And just the fact that we can have a good conversation on it means it's good.

Right. And we'll cover that downstream in the slides here.

I just wanted to quickly add to what Adrian said; a bit of a quibble here. I liked his phrase, learning from experience.
You know, cognition, the rational thought process, is just one component of everybody's experience. There's affect, that's emotion. There's sensation. And there's the experience of volition, as I call it. So there's cognitive computing, everything we're describing; there's affective computing, like sentiment analysis; there's sensory computing, the Internet of Things; and there's what I call volitional computing, things like decision automation and next best action. Together, these need to be incorporated into the notion of what you need, in sum total, in terms of capabilities for a system to function from its 360-degree experience. I think cognitive computing by itself is not enough to capture the entirety of what needs to go into an intelligent experience.

That's a very good point, and we're going to pick up on that. One thing I want to point out right from the get-go is that there are a lot of similarities; in fact, there are more similarities than differences between the big data and cognitive computing stacks depicted by this generic diagram. Roughly, it starts with your data sources, then typically goes through some type of ETL, with NLP. There's pretty much the same symmetry here; it starts to differentiate a little at the middle layer, depending on what you're using, and it's really a combination: there's SQL, there's NoSQL, there's RDF, there are other object stores. What we've seen over the last year is a plethora of these Hadoop ecosystem platforms forming, such as Cloudera and Hortonworks, and these are now transforming into the enterprise data hub, to be plugged in alongside the enterprise data warehouse. What I wanted to point out here, and then turn over for discussion, is that your inference really starts to kick in at the upper two layers, okay? So yes, you can do descriptive and predictive analytics with big data; that's why I call out the prescriptive and cognitive separately.
If you go from left to right, you're really going to see some of the differentiators, and believe me, we're going to get into them. Then there are the different types of applications, from IBM and others, with different UIs depending on what vertical and whom you're addressing, whether it be clinicians, knowledge workers, or consumers. So, comments? Jim, why don't you start off?

Yeah, Jim again. I like this. I like the fact that at the very top you put engagement. That's not a component of the big data stack as anybody conceives it, but it's fundamental to the cognitive computing stack. I mean, when people think about cognitive systems like IBM Watson, or movie versions of such things, like HAL, which everybody knows, at least in theory, it's all about engaging human beings in a conversation, however conceptualized. It's all about engagement, to deliver direct decision support and guidance to human beings trying to make decisions in various contexts. So putting engagement at the top layer here, I think, is bang-on correct. Everybody knows that this week IBM and Apple entered into an alliance going forward, and one of the shorthands the journalists use is: how will Watson and Siri play together? In many ways, those really are the things we're grappling with, in terms of doing conversational computing on top of conceptual computing on top of cognitive computing. The three C's. It's exciting. In other words, how can Watson or Siri or any other automated cognitive system pass the Turing test?

Yeah, we're going to talk at length about that. I'm glad you mentioned that, Jim, because I actually have some slides downstream that really get into what happens at the upper two layers, after we cover the symmetry and the similarities between big data and cognitive computing. Tony, comments?
I want to say that I agree 100 percent that it's about what you do with the big data, and I think that's where the complementary relationship between the two comes in. We're so focused on collecting data and analyzing it with machine learning tools, but that's a domain that's very much left to data scientists, or buried within the tools, maybe exposed a bit more to people in the business. One of the things we're hearing from people is that they want to know what the business value of collecting and analyzing all that data is, and I think that value comes from opening it up to be exploited by cognitive technologies that can take advantage of the knowledge that's been mined and, in the next step, made explicit so that it can actually be used by predictive systems or by cognitive systems. And I think the whole UI and application API piece of that is out there at this point.

Yeah, exactly. So the interesting question, and this is where we want to kick in now that we understand the symmetry and the similarities, is: what are the key differences? I took a first pass at this chart, tongue in cheek, borrowing the Gapingvoid cartoon on the differences. In short, big data lives more at the information level, and cognitive computing, where you're adding content associations, is really more about knowledge. That's why there's a Google Knowledge Graph, and there's Watson, and so on and so forth. More specifically, here are what I think are the three key differences of cognitive computing. I'd like to open this up to the panel.

Well, it's a question of taking advantage of all the knowledge that's sort of implicit in big data. It has to be made explicit. Patterns are getting exposed; what do those patterns mean? How can we use them in a particular business context, and why are they valuable to us? I think that's the last layer that we're now tasked with adding on: making it explicit to people so they can understand what it means and how they can use it.

Right.
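The point about turning patterns that are implicit in big data into explicit statements a business user can act on can be sketched in a few lines of Python. This is a toy illustration, not any panelist's actual method; the basket data, function name, and support threshold are invented for the example.

```python
from itertools import combinations
from collections import Counter

def explicit_rules(transactions, min_support=2):
    """Mine frequent item pairs and restate them as explicit,
    human-readable statements (the 'made explicit' step)."""
    pair_counts = Counter()
    for items in transactions:
        # Count each unordered pair of items appearing together.
        for pair in combinations(sorted(set(items)), 2):
            pair_counts[pair] += 1
    return [f"{a} is frequently bought with {b} ({n} times)"
            for (a, b), n in pair_counts.items() if n >= min_support]

baskets = [
    ["bread", "butter", "jam"],
    ["bread", "butter"],
    ["bread", "milk"],
    ["butter", "bread", "milk"],
]
for rule in explicit_rules(baskets):
    print(rule)  # e.g. "bread is frequently bought with butter (3 times)"
```

The pattern mining itself is the data-science step; the final list comprehension is the step the panel is describing, where a latent co-occurrence becomes a statement that downstream systems, or people, can use directly.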
I'm making some notes on this, because I don't want to let that slide pass; I like it. You know, I look at learning, and I say that learning is the fundamental thing for cognitive systems. The three words that generally come up are patterns, relationships, and context, and they're all related. Going back a little bit, they're related to the stack I mentioned before, going from data to information to knowledge, and if you really wanted to get people worked up, you'd talk about wisdom. But what we're looking at here is: how do we extract something of value out of a lot of data? First finding the patterns, finding the relationships, then putting context around them so that they can be used for business decisions or for research decisions, whatever the application is. I think when you go on to inference, hypothesis, adapting, and improving, that's the learning part that we're talking about. So I'd really like to focus on your third bullet point there; that's what I was getting at in terms of experience-based performance that doesn't require reprogramming. So, yeah, I think you've got it here.

Yeah, this is really nice, because I think you hit the nail on the head. What's differentiating now is that there's so much data, and there are so many different patterns that might potentially be findable within the data, but there aren't enough human beings on Earth who can be trained to be data scientists. Even if everybody on Earth were given the most powerful data science tools available and went and got a PhD in computer science or statistics or whatever, you still wouldn't have enough resources to find all the valuable insights. So what I'm getting at is that automation is absolutely essential. The models themselves will learn from fresh data; the models will find the sense; the models will adapt; machine learning models and so forth.
Without the need for direct reprogramming, really, or direct modeling by human beings. Automation is absolutely essential for the human race not to get swamped by all the data coming in: not only to find the insights in as close to an automated fashion as we can make it, but to drive those insights, in a fairly automated fashion, downstream into all the applications or decision points where they may be needed, without the need for any human being to go and write any code. It just happens automatically in the infrastructure.

Well said. Well said. And with that said, we're into some of the building blocks, and this is you again, Jim, regarding how cognitive computing can take the semantic web to the next level. This was a post you did earlier this year. Can you elaborate a bit on that?

Sure. I'm charging a nickel every time somebody cites me. So, the semantic web. A big part of my job at IBM is that I publish blogs every week in various channels, including DATAVERSITY. In January, I was thinking about this very topic of AI for the 21st century, and I was thinking: okay, what's missing from general discussions, or specific discussions, of cognitive computing to make AI a reality, as we normally perceive the branches of AI? Clearly, the semantic web and semantic analysis, related to natural language processing and so much more, have been under discussion for a long time. And when you look at finding the sense in unstructured content, and when I say unstructured in this context, I'm referring not just to unstructured text, but also, especially, to media: audio and video and so forth.
What's absolutely essential is that as you extract the patterns, you're able to tag the patterns, the data, the streams; deepen the metadata that gets associated with that content; and ensure that metadata flows downstream to all the consuming applications so they can fully interpret all of the content of those objects in whatever the relevant context is. And I thought, gosh, the semantic web people have had the standards and technologies for this for a long time, in terms of OWL and RDF and ontologies and taxonomies, and they need to be brought into the overall cognitive computing discussion. It's a key part of what I've elsewhere called thick metadata, to enable semantic computing as an integral component of cognitive computing. Because when you think about cognitive computing, many people who aren't really experts in this area think of structured data; they think of just, okay, more decision automation against standard core enterprise business applications. But most of the new applications of cognitive computing work on completely unstructured sources, where the semantics is not defined in a set structure or whatever in advance; it needs to be extracted from the content, then mapped into a semantic model of one sort or another and managed in repositories and so forth. For cognitive computing to achieve its promise, we're going to need that thick metadata layer that incorporates semantic tagging. That was the germ of my thought there.

Well said. And a couple of instantiations of this: there is, of course, Google's acquisition of Metaweb a few years ago, which forms the base of their Knowledge Graph. There are a lot of knowledge graphs, but that's a strong example. Now, I'm going to give everybody a shout-out here, after perusing your Twitter streams.
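The thick-metadata idea just described can be sketched concretely: extracted entities are tagged as subject-predicate-object triples, the model behind RDF, so that downstream consumers can query the metadata without re-parsing the content. This is a minimal in-memory sketch, not a real RDF library; the store class, asset identifiers, and predicates are invented for illustration.

```python
# A minimal in-memory triple store illustrating RDF-style
# subject-predicate-object metadata attached to mined content.
class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        # None acts as a wildcard, like a simple SPARQL triple pattern.
        return [t for t in self.triples
                if (subject is None or t[0] == subject)
                and (predicate is None or t[1] == predicate)
                and (obj is None or t[2] == obj)]

store = TripleStore()
# Tags that a text- or video-mining stage might emit for two assets:
store.add("video:1234", "mentions", "IBM Watson")
store.add("video:1234", "topic", "cognitive computing")
store.add("doc:42", "topic", "cognitive computing")

# A downstream consumer finds everything tagged with a given topic,
# without ever touching the raw audio, video, or text.
assets = [s for s, _, _ in store.query(predicate="topic",
                                       obj="cognitive computing")]
print(sorted(assets))  # -> ['doc:42', 'video:1234']
```

In practice this role is played by RDF stores queried with SPARQL, with OWL ontologies constraining which predicates are meaningful; the point of the sketch is only that the metadata, once extracted, travels with the content.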
I kind of like this depiction, Adrian, from a tweet you did in June regarding how the IBM Watson functions layer maps to cognitive computing; you've had a lot of interaction with them. Can you give us some of your thoughts on this? And while we wait, a question from the audience: what distinguishes cognitive computing from data mining and machine learning? We're going to cover machine learning in a lot of detail right after this series of slides, but Jim, did you want to take the data mining part of the question?

Repeat the question?

Yeah, we're going to cover machine learning. But the analogy between cognitive computing and data mining?

Well, data mining, of course, is the process of finding sense in data, and in the field of cognitive computing, one of its core applications, though not the only one, is to find sense in multi-structured data. So there's a direct analogy. Cognitive computing in a broader context is really geared towards finding sense in the unstructured sources, the multi-structured sources. So in many ways you can look at it as: there's data mining, there's text mining, there's sense mining or semantic mining; you can use the word mining and qualify it a number of ways. But the thing is, when you're talking about cognition here, what you're finding is sense that you can then graph out; you can graph out a model of one sort or another. You can graph out the relationships in an affective context as well, in terms of sentiment, or patterns of influence based on people's feelings about a topic. So in many ways, it's just the broader concept of mining for meaning.

Mining for meaning. So, do we have Adrian back?

I'm here.

Okay. You had a couple of elaborations on your diagram here, on how the foundation maps to cognitive computing, or some of your permutations on that.
In fact, I just pushed the button, and the updated version of this will appear with the hashtag cognitive computing. This was a first stab at talking about a stack for cognitive computing, with the foundation technologies of data and analytics. And of course, you could blow out a lot of things there with hardware and workloads, workflow and architecture. But what I wanted to get at was the idea that learning is the central thing, without which none of that really matters in terms of context. For those of you looking at the slide on the webinar: I actually made an overlap between perception and learning, to account for the learning that goes on outside of natural language processing, outside of the kind we're talking about with Watson, when you get to Google Brain and Project Adam; because, as I said earlier, that's learning based on experience, but with no language involved. The other thing I've been promoting lately: the division between structured and unstructured has bothered me for a long time, because my feeling is that if there's really no structure, then it's noise. It's just that we haven't identified the structure yet, or been able to extract the patterns within that structure. So if we know that it's natural language, if we know that something's written in English, then we have reasonable confidence that there's actually valid English structure in there; it's just not structure at the surface level like you would have with data.

That's an excellent point, and we're going to pick up on that in two slides. Before we do that, Tony: a couple of things stood out in a couple of your tweets, and I kind of like this whole notion of knowledge representation and designed serendipity, which is another fundamental differentiator between cognitive computing and just big data. Can you elaborate on that?
In the blog post, with its emphasis on the fact that AI today is very long on the A, the artificial part, and short on the I, the intelligence part, I was really trying to speak to the people who have declared victory in AI, basically because we're getting quite sophisticated in machine learning techniques and we've got more and more of this big data everywhere. It's premature to declare victory, and I worry about the same sort of hype cycle that happened back in the late 80s and early 90s around the last wave of AI, which didn't come to fruition. I think we're closer now; we've got better technology. But I was trying to make the point that exploiting knowledge representation isn't really the same as just doing text processing or natural language processing, and it's not even the same as doing machine learning, where you expose patterns and try to make predictions based on those patterns. That's a step that can enable it, but I think you really have to take the knowledge, encode it in some way, and make it explicit and machine-readable. There's a difference, if people are familiar with the notion, between tacit knowledge and explicit knowledge. I'm a cook: I can make a recipe and just kind of do it without even thinking or reflecting on it. But if somebody asks me to describe how I make something, it's sometimes very difficult for me to describe that process and all the little things you just do: how much of something to put in, and when you know it's right. That's what we have to get to, to be able to have these cognitive systems, systems that can really act at least semi-autonomously.

And is that what you mean by designed serendipity: being able to meld the implicit or tacit with the explicit?

That's an excellent question. Yeah, what I worry about in that case: we talked a long time ago about the filter bubble that came up in search, right?
There was the fact that people would get drawn so deeply into seeing the same sorts of things that they had found before, for the topics they put into search engines, that they would never discover other things going on around them. Well, I worry about the same thing with the machine learning that we're dealing with all the time: that we're just going to reinforce our existing knowledge and existing biases. So I do think we have to regard things in different ways. We have to be able to move up a graph, if there is an up, in a hierarchy; we have to be able to explore other paths through the graph and bring into play other aspects of knowledge that might not necessarily be obvious. That's where real creativity comes in, and where we begin to get to things that are more unique, versus purely mechanical processing. But I think we're a ways away from that.

Do you have one more point, Adrian?

I'll make the point later on.

As I mentioned, of the key layers in the previous diagram, the five-layer stack, we're going to concentrate on the machine learning, the reasoning, and then the user engagement. So, machine learning is a branch of AI: the algorithms process the data and draw conclusions. It's pretty much broken down into two buckets with different methods, supervised and unsupervised. This is not meant to be thorough; these are the typical algorithms and the goals in applying them. We'll get to deep learning in a bit, but the quoted part here is notable: you might have thought about it already, but when Matthew Zeiler said in this Wired article that Google is really a machine learning company, it crystallized things. A lot of companies are turning into that; you could say the same thing about Facebook. And in Watson, there's a lot of machine learning. Machine learning really is pretty much the new black.
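The two buckets just mentioned can be illustrated with a minimal pure-Python sketch; the toy data and function names are invented for the example. A nearest-centroid classifier learns from labeled points (supervised), while a tiny one-dimensional k-means groups unlabeled points (unsupervised):

```python
# Supervised: learn from labeled examples, then predict a label.
def nearest_centroid_fit(points, labels):
    centroids = {}
    for label in set(labels):
        members = [p for p, l in zip(points, labels) if l == label]
        centroids[label] = sum(members) / len(members)
    return centroids

def predict(centroids, x):
    return min(centroids, key=lambda label: abs(centroids[label] - x))

# Unsupervised: find structure (clusters) with no labels at all.
def kmeans_1d(points, k=2, iters=10):
    centers = sorted(points)[:k]  # naive initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: abs(centers[j] - p))
            clusters[i].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

model = nearest_centroid_fit([1.0, 1.2, 7.9, 8.1],
                             ["low", "low", "high", "high"])
print(predict(model, 7.5))             # -> high
print(kmeans_1d([1.0, 1.2, 7.9, 8.1])) # -> roughly [1.1, 8.0]
```

The contrast is the one the slide draws: the supervised model needs the "low"/"high" labels to learn from, while k-means discovers the same two groups from the raw numbers alone.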
On the Spark side: Spark is basically replacing MapReduce. Getting back to the symmetry between machine learning and cognitive computing, the Hadoop distributors have made Spark part of their distributions. And when Google says, "we're not using MapReduce anymore," well, no foolin'; no one else really is either. Everyone is moving to Spark, and it's also being applied to other data stores: Cassandra, MongoDB. And for the ETL, and for complex iterative computation, which is what this diagram shows, the in-memory performance is something like 100x MapReduce. You're starting to see it applied to use cases in churn, fraud detection, streaming analytics, and so forth.

There was a conference this week, the GraphLab conference, and this was a pretty interesting post characterizing this depiction here from GraphLab. It takes you through the automation of the process from the data sources to the ETL, and then to a batch of static and dynamic models, to be able to make these predictions. One of the things that stood out to me, beyond the "my curve is better than your curve" demos, was this metric: 80% to 90% of the use cases can go through the cycle without requiring deep, specialized engagement. If you can automate a lot of this, then you can get closer to the goal of the democratization of machine learning. Comments from the panelists?

It will be democratized in the future to the extent that, literally, it's available everywhere, ubiquitously, at low or no cost.
They'll have machine learning in the cloud on the back end, and mobile and really any form-factor client on the front end will have access to the outputs of machine learning models and so forth. But democratization of machine learning also means the tools for building and tuning machine learning models need to be, I'm going to use the word foolproof, need to be so embedded in the experience of developing apps that we don't even realize we're doing it. Everybody's doing it; everybody's sharing their knowledge. Fundamentally, cognitive computing is all about machine learning assisted by human learning: the judgments of experts and of just regular people. The crowdsourcing of intelligence improves machine learning; you continue to adapt the models to the results. So democratization will be a reality when all of that has happened, and as an industry we're nowhere near it: ubiquitous machine learning on the back end, ubiquitous on the front end, across the whole spectrum up to Watson, along with real-time, user-friendly, interactive, visual development tools. Today we're using such tools to develop Hadoop and MapReduce models and so forth, but even Hadoop, the biggest of the new approaches to big data, has achieved nowhere near democratization in that sense, in terms of broad applicability, yet. Yep. Yep. In the interest of time I'm going to move on. What I wanted to point out from this slide is that machine learning is the new black; there are a lot of dollars flowing. This is a partial list, and there's a lot more, but it gives you an idea of the very young start-ups that are getting 20, 30, 40, 50 million. The money is starting to flow big time.
And there's a combination of different methods. Some of these are deep learning plays, like Vicarious, and others are using some of the other methods. The newest big player to join the group was the announcement of Microsoft Azure Machine Learning, and that's a significant effort as well. You know how hot machine learning is? Even an old greybeard like Bill Gates, somebody of my generation who has nothing to prove, the richest man on earth, this great philanthropist, even he, you can read interviews with him recently, says that if he were going to get back in the game, machine learning would be it. He'd focus on all of that. If somebody like Bill Gates is getting re-energized by machine learning, you know it's hot. You know it's actually hot. That's a valid point. So, getting back to what we just touched on, this whole notion of automating data science: this is a provocative post by Louis Derrug, and it reinforces what you stated previously, Jim, regarding what makes up a data scientist. He was really emphasizing that they need to know machine learning, because that's really the fundamental key of what we've been talking about in the previous slides. Comments? I'll add something else: I've been on a rant lately about getting out into data science to see what people are actually doing. Yeah. Right. So this really segues into another interesting post regarding algorithms.
I mean, again, you're always going to have teams of data scientists. There was a question, covered in the white paper you'll get, about whether cognitive computing will need more data scientists, an equal number, or fewer; I forget how the metric came out. But basically, as the sophistication of the algorithms grows, if they're embedded into semantic models to do what they're doing, the more sophisticated algorithms become the power designers. That's the whole purpose of that slide. The interesting thing is that automated algorithms are really good at generating prototype after prototype, iteration after iteration. That's the notion of explicit knowledge: its generation can be automated, because of the patterns. But you still need the tacit knowledge of human beings, either subject matter experts or crowdsourced average people, to feed back: no, that pattern doesn't make sense; yes, that one does, that's closer. You feed human judgments continually into this automated system that's producing one pattern after another, to help refine the models and make sure they're still on track with what the consumer or stakeholder, human beings, actually experience. So, like I said, machine learning and human learning need to be co-dependent; it's a co-dependent process of continuing to iterate and refine those models. I totally agree with that. It is a co-dependent process. But here's an interesting observation by Vinod Khosla in "Three Predictions for the Future of Health." Maybe it's a little exaggerated, but it's a pretty high number: 80% of what doctors do being replaced by machines. Which gets back to your point about that whole human-to-machine interaction, and the consumer side of it.
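The human-in-the-loop feedback described above can be sketched in a few lines. Everything here (the candidate generator, the judgment function) is hypothetical, standing in for an automated model search steered by a human reviewer.

```python
# Hypothetical human-in-the-loop model search: the machine proposes
# candidate models (here, just threshold values); a human-judgment
# function accepts or rejects each one, steering the automation.

def generate_candidates():
    """Automated side: churn out prototype after prototype."""
    return [round(0.1 * i, 1) for i in range(1, 10)]   # thresholds 0.1..0.9

def human_judgment(threshold):
    """Tacit-knowledge side: a stand-in for an expert saying
    'that pattern makes sense' only within a plausible band."""
    return 0.4 <= threshold <= 0.6

def refine():
    accepted = [t for t in generate_candidates() if human_judgment(t)]
    # The system keeps only what human judgment confirms; a next
    # iteration could then search more finely around the survivors.
    return accepted

print(refine())   # prints [0.4, 0.5, 0.6]
```

The generation step is cheap and fully automated; the scarce, non-automatable resource is the judgment call, which is exactly the co-dependence the panel describes.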
Similarly, there's the IBM coupling with Apple, which we're going to cover here. Let me make a quick cut over to deep learning, because this is the idea of using multi-layer neural networks that teach themselves. We'll talk about the Google Brain project and then Microsoft's answer to it, Project Adam. It's a busy slide with a number of things on it. You're seeing a move here: Microsoft has it, Google has it, Apple is doing it right now. We talked about images, and we're seeing the improvements already: voice recognition in Android, Skype Translate coming out later for real-time translations. Pretty impressive, if you've seen the demos. And you can see it in the image benchmarks; these are pretty interesting metrics. Humans can match faces at 97.53%, Facebook got very close to that, and then the Chinese University of Hong Kong hit 99% with its neural-network classifier. And now that I'm in here, we've got the latest models. There's work that other companies are doing, like Numenta: they say that after nine years of research they can mimic the way the brain works, and that plays into the continuing iteration of cognitive computing. Now, in the interest of time, with just 16 minutes left, I want to talk about the top layer, the UI, the application engagement. What I'm getting at here goes beyond the typical BI user interfaces. With cognitive computing, my contention is that the interface is contingent on the vertical use case and who the target user is, whether a clinician, a knowledge worker, or a consumer. For example, in life sciences you have topological graphs of different cancer types and shapes; these are screenshots from Ayasdi.
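To ground the "multi-layer" point above: a single-layer network cannot represent XOR, but one hidden layer can. Here is a minimal forward pass with hand-set weights (not a trained model, and not any vendor's system), just to show why depth adds representational power.

```python
# A tiny two-layer (one hidden layer) network computing XOR with
# hand-chosen weights and a step activation. A single-layer network
# cannot represent this function; the hidden layer is what makes it work.

def step(x):
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    h_or  = step(x1 + x2 - 0.5)       # hidden unit 1: OR of the inputs
    h_and = step(x1 + x2 - 1.5)       # hidden unit 2: AND of the inputs
    return step(h_or - h_and - 0.5)   # output: OR and not AND = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))   # fires only when inputs differ
```

Real deep learning systems learn such intermediate features (edges, textures, faces) layer by layer from data instead of having them hand-wired, but the structural idea is the same.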
If it's for collaboration, it's one thing; if it's customer service, maybe it's that intelligent personal agent. Comments from the panelists? Yes, this is Chris. I think back to one of the questions that was raised about cognitive computing: how do we make a business case around it? This sort of thing really does begin to get at the real use cases, whether it's augmentative in a decision-support way, helping a doctor with diagnoses, content discovery, personal assistance, product recommendations, all sorts of things. And I wanted to raise a point: we spent quite a bit of time on machine learning, but I don't equate cognitive computing exclusively with machine learning. I like to think of four approaches to getting to cognitive computing, and you can slice and dice these different ways. There are semantic technologies: constructed ontologies and constructed knowledge modeling, whether done by groups or individuals or crowdsourced as in the semantic web. The linked open data models are very valuable ontologies that can be used for cognitive computing today in practical terms. There's machine learning, if we move from just using the patterns to really analyzing the patterns, building explicit knowledge models, and doing that iteration Jim mentioned, where you have that feedback loop and begin to really leverage the data you're producing to create explicit knowledge models. And there's what I like to call generative or probabilistic models, which work particularly for everyday sorts of tasks where you don't need high accuracy but want to look for serendipitous, disruptive sorts of things: content discovery, personal assistance, calendars, and just everyday tasks.
So I think we have to bring a lot of different approaches to bear, and the use case will dictate the approach, how much we want to invest, and what the business return is. Yeah. I like that slide because you've got the topological map of 14 cancers at the top, and an oncologist, a doctor who focuses on cancer diagnosis and treatment, would use these kinds of maps. What I'm getting at is, if you look at targeted users, for example an oncologist trying to diagnose a cancer, or determine whether it's likely to spread or already has: what they're doing, or what they should be doing, is being a scientist. They should have a scientific approach, meaning they gather more evidence, weigh it against their prior hypotheses, and ask, what do I know and not know, and with what degree of probability? It's very probabilistic. Hopefully the cognitive computing application will give them guidance throughout the investigation: gather this data, try this treatment, see if the patient responds, and so forth. The back-end system, whether it's Watson or something else, is irrelevant; it should allow that scientist, the oncologist, to get closer to confirming some hypothesis that gets down to the factors they're looking for, so they can treat effectively. So you always have to provide guidance geared to the specific decision points confronting somebody who, hopefully, is using a scientific, critical, data-driven framework for decision-making. And it doesn't have to be an oncologist or somebody with an advanced degree. It could be a consumer, and there were consumers in the survey, somebody who is just looking for the best buy, whatever it is, in the marketplace.
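The "weigh evidence against a prior hypothesis, with a degree of probability" loop described above is essentially Bayes' rule. Here is a minimal sketch; the prevalence and test numbers are invented for illustration, not clinical figures.

```python
# Bayesian updating as the 'scientific approach' described above:
# start from a prior, weigh each new piece of evidence, get a posterior.
# All numbers are invented for illustration.

def posterior(prior, sensitivity, false_positive_rate):
    """P(hypothesis | positive evidence) by Bayes' rule."""
    p_pos = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_pos

belief = 0.01                            # prior: 1% before any evidence
belief = posterior(belief, 0.90, 0.05)   # first positive test
print(round(belief, 3))                  # prints 0.154
belief = posterior(belief, 0.90, 0.05)   # second, independent positive test
print(round(belief, 3))                  # prints 0.766
```

Note how a single positive result leaves the hypothesis far from confirmed when the prior is low; it takes accumulating evidence, exactly the "gather this data, try this, see if they respond" guidance the panel describes, to drive the posterior toward a confident answer.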
So there are many different kinds of products that might meet their needs, and the advisor would help them work through the decision tree to find the best product for whatever outcome they're trying to achieve, hopefully in a more scientific manner, so they can have greater confidence that the answer they arrive at is the best, or close enough. Exactly. So in the interest of time, let's move on. We do have a question from the attendees, really quick, before we get too much further: do cognitive computing and semantic computing still need a soul behind them to make decisions? And which branch of semantic computing deals with the storage of decision paths for automation? I'll answer the first part and let the other panelists take the second. Yes, cognitive computing and semantic computing need a soul, meaning you need human judgment to do the modeling, but also to evaluate the outputs of the models to make sure they're on track with whatever humans happen to be experiencing, whether expert humans or just average humans. So yes, they definitely need a quote-unquote soul, human judgment, in many places in the overall process of building, tuning, and using the models. As for which branch of semantic computing deals with the storage of decision paths for automation: I don't know; that one's a head-scratcher to me. Maybe in the follow-up. One thing I wanted to get across is the UI for mobile. And by the way, there's another Watson: AT&T Watson, a speech platform. We'll see how that pans out regarding the trademark. Yeah. Absolutely. I can't let that pass without throwing out an alternative view: for a lot of what we're doing with cognitive computing, I would say we don't need a soul.
I don't think that the diagnostic area, in some of these things, has anything to do with what we think of in terms of humanity, despite all the views out there now about robots taking over. Right. The reason I teased that is, IBM has its own speech program, there's Nuance, each of the big companies has one. But the point here is bringing in other modalities: automatic speech recognition, gestures, emotions. There are a lot of other parameters in dialogue management in addition to the natural language processing. A lot of this is being rolled into Cortana and Bing, into Watson, and of course with Google, into Google Now and the Knowledge Graph. There are a lot of other types of parameters that have to be factored into the scheme, right? And in big data, this again is one of the differentiators: all these other modalities. Speaking of the news, what's kind of interesting here is Watson, well, not just Watson, but IBM and Apple. This is really interesting, because you can actually have intelligence at the interface combined with intelligence on the back end. And you can see, in the article I teased, that there are very specific interfaces depending on the use case, and then you have a general assistant. I couldn't resist, when I saw this news, bringing in the old concept piece, the Apple Knowledge Navigator from 1987, which really is a marvelous piece. I remember it: the professor comparing deforestation in the Amazon with what was happening over in Africa. It goes way beyond the capabilities we see today in Cortana, Google Now, and Siri.
Well, Watson, of course, was just adapted to speaking to Alex Trebek in a game-show setting. So we've gone well beyond that. Yeah. I'll go ahead. In terms of IBM Watson, we very much build applications for specific domains, like healthcare and so forth, that use very specific interfaces to those decision points. Whereas in some of these areas, as you indicate, Siri is more of a general assistant. Going forward, IBM's going to tune Siri and other Apple and IBM technologies to the specific conversational-computing requirements of different decision points. So I think there'll be a blending going forward, and I'm not speaking on behalf of IBM, this is just my personal feeling: a blending of the general capabilities that a Siri provides in almost any decision environment with the more vertical capabilities of a Watson for particular applications. It's extremely likely to happen; let me just say it that way. Okay. So, again, the competition is intensifying. It's kind of a pull-through, but there are start-ups as well. We talked about machine learning start-ups, but it's interesting how, just lately, IBM also acquired the start-up Cognea to give personality to virtual assistants. Cortana is in the game, and the quote from the manager is, "we decided to infuse it with a personality for better user attachment." The good news: Google is talking about full reasoning as well, though, okay, that's still a few years away. But we're starting to see quite a clamoring, an aggregate movement toward adding these other kinds of elements, combining more functional AI with personalities. Do you have a comment? This is Tony.
I was going to say, I don't really care so much whether the cognitive computing personal assistant has its own personality, but I do care that it understands my personality. So I think the contextual piece needs to be there: it knows where I'm at, what time it is, what I've been doing recently, what I'm likely to be doing next, what my objectives and goals are, what my interests are, what my social network is. It has all that context, so it knows me and it knows what I'm doing. Whether it has a personality or not, I don't really care. Right. It's very likely to happen in the industry as a whole: as wearables become adopted more widely, I bet that the individual wearables within your personal-area network will have their own distinct personalities. They'll play together almost as your own personal team, to some extent. Hopefully they'll all be tuned to your personality, like Tony was just saying, but it'll be a society of wearables, each with its own distinct personality, hopefully playing together harmoniously. Yeah. I'll withhold comment in the interest of time; we're down to four minutes. Oh, yeah. One more question here from the audience, Steve, and we actually have just a couple of minutes left. For a knowledge system to be termed cognitive, what are the mandatory characteristics and features? Well, the slide upstream talked about those three characteristics; we covered that in a previous slide, so without belaboring the point, let's finish up the next three slides in the minute here. The whole notion of an emotional component is my point in this slide. There's work going on at the Laboratory for Animate Technologies in Auckland, where the avatar can learn and interact in real time.
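As an aside, the kind of user context Tony described earlier (location, time, recent activity, goals, interests, social network) can be captured in a plain data structure. Everything below, the field names and the toy suggestion rule, is a hypothetical sketch, not any shipping assistant's model.

```python
# A hypothetical context model for a personal assistant: it captures
# the user rather than giving the assistant a personality of its own.

from dataclasses import dataclass, field

@dataclass
class UserContext:
    location: str
    hour: int                      # 0-23, local time
    recent_activity: str
    goals: list = field(default_factory=list)
    interests: list = field(default_factory=list)
    social_network: list = field(default_factory=list)

def suggest(ctx: UserContext) -> str:
    """Toy rule: use time plus recent activity to pick a next step."""
    if ctx.hour >= 22:
        return "wind down; set tomorrow's alarm"
    if ctx.recent_activity == "meeting" and "ship project" in ctx.goals:
        return "block focus time for: ship project"
    return "review your interests feed"

ctx = UserContext(location="office", hour=15, recent_activity="meeting",
                  goals=["ship project"], interests=["semantics"])
print(suggest(ctx))   # prints: block focus time for: ship project
```

A real assistant would infer these fields from sensors and history rather than being handed them, but the design point stands: the value comes from modeling the user's context, not from the assistant's own persona.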
So you've got some of the models baked in, so you can actually create much more lifelike personalities that can also emote and learn about you, which gets to some of the previous points. And we're seeing this in robotics now. This is kind of interesting; I just picked this up: Edmonton Airport is using robots. Somebody said it looks like an iPad on wheels, but the notion is, rather than going to the information booth, you can go to these robots and they'll be able to help you. And there's this company Jibo, from Cynthia Breazeal of MIT; she just raised five million from her Indiegogo campaign, and Aldebaran is putting things out as well. It's actually quite interesting in terms of being that personal assistant. The next topic, which we'll just touch upon here, is bringing cognitive capability, neuromorphic, neurosynaptic chips, right into the chip set. This is not meant to replace traditional CPUs or GPUs; it's complementary, and the players mentioned here are working on that. We're not going to have time; we talked about wearables, but there's also the whole notion of the industrial internet, the consortium that major players have kicked off for the Internet of Things, which is going to multiply the demands on machine learning. And we don't have time to cover the cognitive computing potpourri here, but it's being applied to data centers, to looking at photos to diagnose diseases, things like that. For the observations and the close, let's just say it, and we did talk about it: it's not about replacing humans. It's the collaboration, the combined strengths. And now, final comments from each panelist to really close. Great job packing it all into one hour. Yeah. And there's the Cognitive Computing Forum in San Jose next month; I'll be there.
I'm sure the other guys will be there as well. Yeah, sure. We hope to see some of you at this forum, August 20th and 21st in San Jose. Wouldn't miss it. With that, I'll close the meeting. See you in San Jose. Thank you. Thank you. Steve, it was a fantastic discussion, really packed with a lot of information. As you said, it's so hard to get to everything in an hour, but you guys really covered a lot, with some very good points in there. And thank you to the attendees, as always, for attending, for asking your questions, and for joining in on the conversation; it's always very appreciated. And as Steve already said, we hope you will join us at the Cognitive Computing Forum, August 20th and 21st in San Jose. Hope everyone has a great day, and thank you so much for participating in today's webinar.