Good afternoon, everybody. Welcome to the Open Group Singapore Conference. This is the AI Workshop: Transforming Processes with Artificial Intelligence. I am Andras Szakal. We are both with IBM. This is Michael Flores. Hi. We are with our Public Sector CTO office. We're here to talk to you about how to make AI relevant to your business. There's a lot of information being thrown at you in the media, maybe even by vendors, maybe even by my company occasionally, and it's kind of hard to decipher how you actually make AI relevant to your business. If you're here today to learn about neural networks of all different types, or TensorFlow or Caffe or anything like that, I'm sorry, you're going to be disappointed, because we are intentionally not going down to that level of how you actually do it. And we'll talk about why: primarily because that's not really the level at which businesses, the average organization, are going to implement AI. Now, of course, you're going to have Netflix or some other company like Google implement AI, and they're going to go soup to nuts and use some of these other AI frameworks to do that. And we have quite a few offerings that are based on TensorFlow, Caffe, and so on and so forth, but that's not really the kind of AI that we're going to talk about today, because we want to make it relevant to the business. So Michael, you want to talk about the agenda? Certainly. So as Andras mentioned, today's discussion is going to be focused on giving you enough insight and context around AI so that when you come across a business problem, or have one of your favorite C-level clients say, "Tell me about AI, I want to do AI," you'll have tools and insight that allow you to have an effective conversation, that allow you to work with them to build a prototype, a POC, that will be successful, and that will allow you to use architecture the entire time that you do it.
So we're going to go from the basics of AI and try to establish some real-world definitions. We'll share insight around how you get started with AI in business if you've never done either. Then we'll talk about some of the best practices that we've come up with based on repeated customer experiences. We'll then take a minute to showcase some actual new AI applications that we've built over the past few months, showcasing AI being applied to the business of the Open Group, such as TOGAF and the Open CA framework. And finally, we'll take a minute to share some lessons learned at great cost, insights gained from some of our most complex but still successful implementations. So Michael and I, especially Michael, have been working with our customers to implement AI-based applications and have been through the trenches with what that means. So let's talk about the basics, and we'll start from the beginning with an introduction to AI. First off, AI is not machine learning. Machine learning is really all of the algorithms that are used, as a broad landscape and framework, to implement artificial intelligence. This is a great history. I love to show this, because Kerry and I actually went to graduate school at the same time, and he mentioned that he had a concentration in AI, and I did too, and both of us were kind of talking about the fact that we went to school, got our degrees, and came out to crickets with respect to AI afterwards, right? There was no talk about it up until the last five, six, seven years, maybe since Watson won Jeopardy. So this is a great chart that shows the beginning, with Turing talking about the Turing test, computability, and NP-complete problems, and the creation of the original Minsky neural net, which is actually still quite valid today. And then this group of folks thought, well, in '56, we're just going to go off and work for a few weeks.
In fact, they allocated themselves two weeks to start this group to come up with an artificial intelligence framework. I thought it was kind of interesting that they gave themselves only two weeks to mimic human intelligence. They found it was a little bit more difficult. You started to see algorithms play checkers, then you came across semantic networks. You had the first chatbot, ELIZA — and we'll talk a lot about assistants or chatbots here, because we actually implemented one for the Open Group demonstration. And then we went into this AI winter where not much happened. Then you came out of it, and you started hearing a lot about expert systems and predictability. That was really more about analytics, quite frankly, prescriptive analytics, than it was AI, in my mind. And then the second winter occurred, up until the point where you and I went to school, and even afterwards, not a whole lot. But then IBM used machine learning algorithms to beat Kasparov with Deep Blue. A lot of interest and vitality went back into artificial intelligence there. Then still, from where I was standing, after that there was a lull in interest in AI. And then the DARPA Grand Challenge. Around the year 2000, we actually started working on Watson and thought about what the grand challenge might be. They were sitting in a bar and Jeopardy was on TV. And the scientists from IBM Research said, well, why don't we build an artificial intelligence, a system that can win at Jeopardy, because that is a very complex game. I'll give you an example. One of the winning categories was British television. And the question — or rather the answer; if you know what Jeopardy is, you're given categories and then answers, and you're supposed to figure out what the question is and answer in the form of a question — the answer was: this time machine appeared on BBC television one day after Kennedy's assassination. Anybody know the answer to that? A TARDIS, that's absolutely right. Dr.
Who — you gotta be a Whovian, right? I love Dr. Who. I like the new series too. So anyway, you had to build a machine that understood the semantics of what it meant to answer in the form of a question. You had to build a machine that understood natural language processing, and the context, and the sentiment, in order to actually win that game. And it took them quite a few years between the time they were at the bar and said, hey, let's build a system that actually does this, to the time when they actually went on television and won against two of the top players in the world. And by the way, they ran a lot of simulations — they built a whole Jeopardy simulation at IBM and had many opportunities to play the game against just average researchers. You could sign up to play the game. And part of artificial intelligence and machine learning is actually getting that data, getting the reference data that allows you to train your models and learn over a period of time. Well then, fast forward to not so long ago: Google starts playing Go, which is one of the most complicated board games, with billions of different permutations. And really, to IBMers, we were like, that's not all that interesting. Why? Because it was an extension of what we did with Deep Blue, the machine that played chess against Kasparov. The thing they did that was really interesting was that they came up with these deep learning networks, trained with backpropagation, that learn from learning. They turned the machine around after it learned the basic skills of Go and had it play itself, or play another version of itself, until it actually learned all sorts of new patterns that nobody had figured out before. So that was the innovation there. And then AlphaZero — which was, again, remind me, what was the difference between AlphaGo and AlphaZero? Well, I think a lot of that is representative of the iterations of the learning pattern used.
In this case a system, as you mentioned, would play against itself, and they'd have a modern version of it, say version 9, playing against version 8 or 7. And this, as Andras mentioned, is a very interesting and cool thing. As we'll discuss, with many of the systems that exist today you have to sort of hand-feed them, or create your own way to feed and update them. So this idea of a system that could simply learn from itself without human intervention is still a noteworthy one, and in the future I suppose we'll see more of it. I think that's actually the really cool part about artificial intelligence. So machine learning is really about the algorithms and the frameworks. AI is about trying to mimic human intelligence. And from IBM's point of view, it's more about the human in the loop, aiding the human versus replacing the human. For a lot of different reasons, we don't believe that the singularity — you hear this idea of the singularity, an autonomous sentient personality — is going to happen anytime soon. But this particular AI timeline is really about taking all these machine learning algorithms and so on and turning them into fit-for-purpose solutions, and that started with Deep Blue. You see Kismet up there. I have a Roomba. My wife gives me a lot of crap about the Roomba, because it really is pathetic. I liked it, it was fun, but my wife's just like, how come your robot isn't cleaning over there? Siri is really the front end of a chatbot, with back-end learning algorithms. And you had Watson, you've got Eugene, you've got Alexa — which, by the way, is a series of frameworks; some of them are AI, some of them are not. Recently, I think Tay was in the news, right? She's making a Southeast Asia tour right now. You may be thinking of Sophia, the robot. Sophia, okay. They look the same. I mean, isn't it the same guy that developed them? Don't mind me. Tay is an interesting example, though.
Tay showcases what happens if you are optimistic about a system being able to learn from everything it sees. For those who don't know about Tay, Tay was an artificial intelligence chat system that people could interact with online — I think it was on Twitter. And the system was set up in a completely naive, completely impressionable mode. Basically, it learned from every single interaction. So humans being humans, and humans behind keyboards being somewhat less than that, you ended up with a system that had learned a lot of really, really, shall we say, rude things. But though Tay is a fun example involving social media and vulgarity, the lesson from Tay is still relevant today. Any system you create that's based on AI learns from the data you give it. So in more complex examples, the quality of that data is going to be one of the most important things in your AI system, just as much as, if not more so than, your algorithm itself. Right. So here's your take-home definition for AI: it's the theory and development of computer systems that mimic or perform tasks that normally require human intelligence. We like to think of it as the human in the middle, or the human in the loop, because most of these AI solutions are really fit for purpose. So some of the AI solutions out there that are relevant today: speech and vision, natural language processing, natural language translation, image processing and recognition, categorization, machine learning — being able to actually learn, to create a framework that actually learns particular patterns. One of the things we have been doing is working with NASA to use drone images of rockets before they're launched to determine if there are failure patterns. Normally it would take engineers walking around the tower, and, as you know, they eventually have to remove the walkway tower from the rocket before it takes off.
But a drone can be there until the very last minute and pick out anomalies at pretty much machine speed, if it gets enough patterns of failure categorized. Expert systems and robotics. My lab has three or four different robots, and, you know, Michael and I just really love these things. They can be such a pain in the butt to work with. Everybody loves them, though, don't they, Michael? Oh, yeah. I mean, robots are one of those... I think they're the media darlings right now when it comes to the AI space. If your robot is impressive enough, people will think your AI system is impeccable. But if you run out with a cardboard robot — and I know, because I have one; IBM open-sourced one — people are not so impressed. But whatever the form factor is — robot, website, mobile application — that still exists separate from the AI system itself. In the case of a robot, you may have multiple sensors, multiple inputs, similar to the case of smart cities that we heard about earlier, right? A robot might see, a robot might hear, but it's an AI system on the back end that's taking that vision and categorizing: oh, that's Andras; oh, that's Michael. Taking the audio and recognizing: oh, they're speaking English, and oh, he's saying hello to me. And then taking some other algorithm to say: oh, I should probably say hi back. But in the end, separate the robot and the inputs from the AI model itself. Most people confuse the two. So humans love form factors that they can relate to. A little bit of latex and fake hair goes a long way. But that's really not the solution that you have to think about. And many of these robots, by the way, are not as easy to maintain as we thought they were. They overheat, the gears break down, and so on and so forth. So we've had plenty of experience with that.
I went through Incheon airport on my way over here, and they have one of those robot form factors providing information about gates and flight times and so on. And as I walked up to it, it decided that it needed to dock because it had run out of power. So I didn't get the opportunity to play with it. It said, I'm sorry, I have to find my docking station. So what we have is essentially this perception of what AI is, ranging from machine learning in the scientific and academic space through to this concept of artificial intelligence like HAL. And it's all really relevant somewhere here in the middle. Watson is real, but Watson for playing Jeopardy was a fit-for-purpose system that cost billions of dollars to make, and took a room like this full of computers. So is it practicable? I hardly think so. Was it useful for IBM to build frameworks and learn how to actually provide business solutions? Yeah, absolutely. So why are we really talking about AI today? Kerry and I went to school and we learned backpropagation and neural networks and annealing and all sorts of crazy algorithms. And those algorithms are really the same algorithms that we're using today, with minor differences and tweaks. So why is it that AI has come into the fold today? Well, one reason is the ubiquity of computing power: storage, massive amounts of storage, compute, and networking — so, cloud computing. Another is the vast amount of data that we're generating because of these great little things, these mobile devices that we all have. And those mobile devices are generating information that is used to train the algorithms. So lots of training data, and lots of training data that's provided in real time. The other is the miniaturization of devices in general, specifically devices like accelerometers and temperature sensors. And this device — I think this particular phone is an iPhone XR.
I just got it not so long ago, and it has something like 300 different measuring devices. One of them is barometric pressure. And if you're using an IBM weather app — because we own The Weather Company — we're actually using you as a little weather station. It's part of the agreement when you download the app. Your phone is taking barometric pressure, and we're pumping that data back into IBM and learning what the local weather is like in your area. But I think it's important to recognize that we're not unique in doing that. There are other apps, which likely everyone here uses, about traffic of some kind. And when you use that traffic app and it tells you when you're going to get there, you find out mid-route that it lies to you, right? And it shows you you're going to get there 20 minutes later. That insight, that real-time data insight — whether it's Google, which uses green, yellow, and red as a way to distinguish how much traffic there is — that's all user-generated. And it's not obvious at first if you just use something like Google Maps. But if you use another app like Waze — Waze is all about user input. You are prompted. You are told, hey, there's traffic here, do you agree? And you click yes or no. And in these small kinds of interactions, enabled by these portable computers, we're creating data that can feed AI models, that can feed AI systems — whether it's something that just says, here's the weather according to the phones in this room, or something that says, I'm fairly certain there's bad traffic in Singapore because of all these cars and all the phones inside the cars. So the diagram behind me shows essentially the difference between cognitive computing, machine learning, and AI. AI is a convergence of a set of technologies. Cognitive computing is essentially prescriptive analytics. And machine learning is those algorithms, like neural nets, that actually allow you to identify patterns — visualization, for example.
This robot up here is a robot the NASA folks created, right? Wasn't it NASA? I think. Oh, sorry, BBC. I knew it was one of those big organizations. So anyway, BBC created this robot — some of their engineers did. And really, again, this kind of speaks to why AI is becoming so interesting: it's the miniaturization of all these devices that create relevant, real-time data streams that can be used to make decisions within these decision frameworks. I love builds, Michael. So I'm going to let Michael say a lot about this, because he knows quite a bit. But generally speaking, there are two types — and I talked a little bit about this — two categories of AI. There's generalized, and there's specialized. And it's not intuitive as to which is what. When you use these modern-day frameworks for business, they're specialized frameworks. A lot of the work has been done for you. They're essentially learning algorithms that have been built for you to use for your business. Whereas the generalized side is a lot of pieces and parts, and you would have to put a significant development effort behind it, fund it, and then develop some solution from those parts. And we offer those parts too, like any other company. We have a partnership with Google. We have a machine called PowerAI, which has all of this stuff built into it. And we're using that quite a bit as well. But quite frankly, for you — you're not going to spend $60 million for a relatively minor project in your business and then have to sustain that. Maybe Netflix, maybe Google, maybe Amazon, maybe IBM — these innovators of innovators are going to do this, but not you. You're going to use democratized APIs and fit-for-purpose solutions. What do you think? Yeah, I mean, I really think it's what you see in front of you here that is really responsible for the AI explosion that we see today.
On the right side — my right, your left — the general-purpose AI frameworks are very powerful because they are a tool set. So if you have some problem that you want to apply AI to, and you have the time, the skills, the funding, or the desire to learn any of that, go crazy. With those sets of frameworks, you could build a model to abstract any kind of problem: connect it, point it to your data, teach it how to understand your data, make your data good, curate it well, and off you go. That effort, though a noble and worthwhile one for certain kinds of business problems, has a lot of work involved with it. The majority of AI projects do not pursue something like this — that's what we're generally seeing. Instead, to make AI more accessible — and Andras used the word IBM likes, democratized — you have this idea of specialized AI frameworks. You can think of these as AI prepackaged for a use case, as easy to use as connecting to Twitter. With any of these capabilities, all one has to do as a developer is get an API key, read some documentation, and throw data at an endpoint. And you get back some kind of AI brilliance that you then embed into your business logic, and it makes your team look like a bunch of professionals. This is what has made AI explode, even at hackathons, right? I've had opportunities to work with first- or second-semester computer science students, people with limited coding backgrounds. Again, give them documentation and a web service, and even they're able to start running with AI. The caveat with these specialized AI frameworks that IBM and other vendors provide is that you have to understand how these different AI use cases are spelled out: what use cases do these different services address, and will they address my use case?
This is where vendors like IBM, and our system integrator partners around the world, are really showing expertise: clients come to them and say, here's my problem, I want to build a chatbot, or I want to build a question-and-answer system. And then, based on details and requirements, you end up with a recommendation — IBM's got a capability, maybe Amazon, maybe Google, maybe Microsoft, maybe some other company not shown here. But it's this ease of access that is really fueling the AI explosion and the innovation, and the fact that everyone everywhere is doing AI something. There's a lot of goodness in the general-purpose world, but the prerequisite of deep expertise — PhDs, machine learning, maybe even an AI background — makes that not as common. If I'm to be candid, I don't have formal machine learning or AI training. But I've done a number of implementations for a number of clients, thanks to many of the democratized capabilities of specialized AI. So this is where we see much of the action today. And I do have that background, but I would say Michael is even ahead of me a lot of the time with these frameworks. And it's really in these frameworks that we're beginning to see the application of AI in general. So you have the innovator's innovator using these really deep learning frameworks — PyTorch, Caffe, and so on and so forth. Instead of hand-coding them from research papers — which you could hire 60 or 70 really high-powered PhDs to go do — now you have this wrapped up in a package, and you could basically spend half that. Then you're going to try to apply it to your own business. It's still a long mountain to climb, a high mountain to climb, but it's come down quite a bit. And so you're seeing a lot of innovation coming out of the general-purpose space.
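The "get an API key, read some documentation, and throw data at an endpoint" workflow Michael describes can be sketched roughly as follows. Everything here — the endpoint URL, the key, and the field names — is made up for illustration; any real vendor's documentation defines its own.

```python
import json

# Hypothetical specialized-AI service details (illustrative only).
API_URL = "https://api.example-vendor.com/v1/classify"
API_KEY = "my-secret-api-key"

def build_classify_request(text):
    """Package a piece of business data as an HTTP request to a specialized AI service."""
    headers = {
        "Authorization": "Bearer " + API_KEY,  # most vendors use key- or token-based auth
        "Content-Type": "application/json",
    }
    body = json.dumps({"text": text})          # the data you "throw at the endpoint"
    return API_URL, headers, body

# In a real application you would now POST this with urllib.request or similar,
# then read back the service's classification, sentiment, entities, etc. and
# embed that result into your business logic.
url, headers, body = build_classify_request("My order arrived broken, please help.")
print(url)
print(json.loads(body)["text"])
```

The point is how little machine-learning code appears: the model lives behind the endpoint, and the developer's job is only packaging data and interpreting responses.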
So the specialized space has traditionally been all cloud-based, but now we're seeing second-generation AI for the enterprise, and many of these AI frameworks are actually becoming available for your enterprise on-prem. Up until recently, you only had the cloud API implementation. The development approach is still API-based, but the solution is ending up on things like OpenShift and IBM ICP — IBM Cloud Private — installed in the enterprise itself. As a result, you're seeing some of the heavier-duty, still largely expensive solutions — IBM machine learning, as an example — delivered API-based. Which is relatively new. So Michael, tell me all about these lovely cats and dogs. Certainly, and we'll buzz through this to keep on schedule, with about a minute left. So when it comes to understanding machine learning — again, in this session we're not going to give you a very deep explanation; there are plenty of great resources, and we have two folks here who tout themselves as having formal education in it. But at a high level, you can think of learning as either supervised or unsupervised. In the case of supervised learning, you provide data to a system and you label it. In our example — because I love dogs; not so much cats, but they're tolerable — we bring a bunch of pictures of animals to a system. I'm the cat person. He is the cat person. We bring a bunch of pictures, and for every picture we might label it dog or cat. Optionally, we might have another set that's called negative, and that's neither dogs nor cats — some lesser animal that's not fit to be a pet. As you give the system picture after picture — this is a dog, this is a cat, this is a dog, this is a cat — the system builds a memory of what makes a dog and what makes a cat. Now, for this to be good, you want to use different types of dogs and different types of cats.
That way, when you then submit a new picture to the system, it looks at it and says: do I think this is the thing you showed me that you called a dog, do I think it's the thing you called a cat, or do I think it's none of the above? This type of system takes advantage of machine learning methods such as regression and classification. But all you really care about from a business standpoint is: is this a dog, or is this a cat? I know amongst many of your clients, and in your own businesses, you likely can think of a few problems where you want to know: what type of a thing is it? A very simple, basic problem, and supervised learning can be used to do some of that. On the other side we have unsupervised learning, and this is based on methods like clustering. In this case you just throw all your pictures at the system and you do not give labels to your data — you don't use the word dog or cat, or not-dog or not-cat. You just throw in the data, and the system looks at the data, figures out the ways that this data can be related to itself, and then creates clusters. Using established patterns. Using established patterns, or it might infer them on its own, based on the type of learning. And at the end of the day you end up with: oh, these two cute little pups look very similar — I don't know why; they have fur, they have cute little button noses, I've got ideas. And by the way, these somewhat cute, mean-looking cats, they're all one other group — I know, because they've got the pointy ears, they've got a very stern look, and they've got wide eyes. In this way we can teach the system the same thing, either by telling it explicitly, when we know the labels and the outcome we want, or by letting the system come to its own conclusions, when we don't really know what patterns are there to be identified. I hear cats are better judges of character than dogs.
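The dogs-and-cats contrast above can be sketched in a few lines of plain Python. The two numbers per animal are made-up "features" (say, ear pointiness and snout length) — a real system would extract thousands of features from actual images — but the supervised/unsupervised split is the same.

```python
def centroid(points):
    """Average of a list of 2-D feature vectors."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def dist2(a, b):
    """Squared distance between two feature vectors."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

# --- Supervised: every training picture comes with a label ---
labeled = {
    "dog": [(0.2, 0.9), (0.3, 0.8), (0.1, 1.0)],   # blunt ears, long snouts
    "cat": [(0.9, 0.2), (0.8, 0.3), (1.0, 0.1)],   # pointy ears, short snouts
}
centroids = {label: centroid(pts) for label, pts in labeled.items()}

def classify(point):
    """Nearest-centroid classification: which labeled group is this new picture most like?"""
    return min(centroids, key=lambda label: dist2(point, centroids[label]))

print(classify((0.25, 0.85)))   # dog-like features

# --- Unsupervised: same pictures, no labels; let the data group itself ---
unlabeled = labeled["dog"] + labeled["cat"]

def two_means(points, iters=10):
    """A bare-bones 2-means clustering loop with naive initial centers."""
    c1, c2 = points[0], points[-1]
    for _ in range(iters):
        g1 = [p for p in points if dist2(p, c1) <= dist2(p, c2)]
        g2 = [p for p in points if dist2(p, c1) > dist2(p, c2)]
        c1, c2 = centroid(g1), centroid(g2)
    return g1, g2

group1, group2 = two_means(unlabeled)
print(len(group1), len(group2))  # two groups found without ever using the words "dog" or "cat"
```

Notice the clustering step never sees a label: it recovers the same two groups purely from how the data relates to itself, which is exactly the supervised-versus-unsupervised distinction.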
There's machine learning, and then there's deep learning, and from where I stand, Michael, deep learning is really all about things like backpropagation and the ability to learn from other AI learning frameworks. What is deep learning for you? Yeah. Well, when I think of deep learning, I think of pulling humans further and further out of the problem. Because when I think of classical machine learning, I think of the person in that chair spending time trying to figure out what the meaningful features — that's the word we use — are in our data, right? In the case of our cat and dog example, you might have fur versus hair, you might have the nose, you might have the eyes, you might have the prevalence of whiskers. A human can do that in general machine learning, but in deep learning we tend to leave that up to the system. So what you're saying is that in machine learning I need to basically describe that a dog has a body, and it might have four legs in an image, and it has a square face, but a cat has pointy ears and a tail — and then see what your outcome is when you run it through the neural network, and then you go back and change the scoring, right? Yep. So in deep learning you're using things like IBM's recent neural chip — what's the name of the neural chip? Oh, you caught me, I don't know that one. So we actually have a deep learning chip that is used to run deep learning algorithms, and deep learning algorithms usually take multiple layers of machine learning, use backpropagation, and automate feature extraction from the patterns themselves. So it's a lot more complicated, and it's closer to the metal, but you can get better results if you're looking for something very, very specific. So, any questions up to this point? Do we need to grab the mic? Yeah, I guess. Well, they want you — I think they want you on the recording, that's the thing. Yeah, the mic's right here. Ah, there it is.
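To make the backpropagation idea concrete, here is a minimal sketch: a tiny 2-4-1 network learning XOR in plain Python. The layer sizes, learning rate, and epoch count are arbitrary illustrative choices, not anyone's production recipe — the point is only the forward pass, the error, and the gradient flowing backwards to adjust the weights.

```python
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Weights: input(2) -> hidden(4) -> output(1), plus biases.
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(4)]
b1 = [0.0] * 4
W2 = [random.uniform(-1, 1) for _ in range(4)]
b2 = 0.0

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR truth table

def forward(x):
    """Forward pass: input -> hidden activations -> output."""
    h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(4)]
    y = sigmoid(sum(W2[j] * h[j] for j in range(4)) + b2)
    return h, y

def epoch(lr=0.5):
    """One gradient-descent pass over the data; returns total squared error."""
    global b2
    err = 0.0
    for x, t in data:
        h, y = forward(x)
        err += (y - t) ** 2
        dy = (y - t) * y * (1 - y)          # output delta
        for j in range(4):
            dh = dy * W2[j] * h[j] * (1 - h[j])  # hidden delta, via the old weight
            W2[j] -= lr * dy * h[j]              # backpropagate into each layer
            W1[j][0] -= lr * dh * x[0]
            W1[j][1] -= lr * dh * x[1]
            b1[j] -= lr * dh
        b2 -= lr * dy
    return err

before = epoch()
for _ in range(5000):
    after = epoch()
print(before, "->", after)  # the error shrinks as the weights are tuned
```

No human told the network which features of the inputs matter; the layered weights plus backpropagation discover a useful internal representation, which is the "automated feature extraction" contrast with classical machine learning.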
Alright, and I've got to get you for the people in the cheap seats — go ahead. So, two questions actually. Great presentation, by the way; I like the dual-presenter format. A particular thought leader that we all know well has expressed concern over AI and has labeled it an existential threat. Question one would be your thoughts on that, and then secondly, closely related, thoughts on national or international regulatory bodies that should be looking at AI R&D. So yeah, two great questions. Let's talk about Elon Musk a little bit. Setting aside the fact that he likes to be controversial and stir up dialogue — which is good, because I think this entire space is going to move very quickly — beyond that, he probably benefits more from AI than any person in this city, right? Self-driving vehicles are AI systems, right? So he gets to see this stuff in real time, and I think he's said it himself that he's kind of amazed at how quickly this has all evolved, and he's concerned with the fact that you might have machines that are thinking at speeds faster than humans. I absolutely think you're going to find a machine that is better at a particular set of data — at finding the answer using natural language processing — than a human is going to be.
That's true today, too. But the question is: will machines ever be able to reason? And does being alive — being an organism, a living animal — give you a leg up in many ways? This is a very existential question some people are asking, and it goes all the way down to quantum mechanics. For example, one of the things that makes you very unique is that you don't live forever, and you are faced with survival in the real world. So there may be some experiences that you will always have that a machine will never have, in its current form. Now, I believe personally that in the further future we will probably not be replaced by machines, but will instead integrate devices and technology into the human experience. In fact, a lot of folks say that this right here is already integrated with you. I don't know how many of you have had this experience where you put your phone down, leave it, and then somebody says, you know, when is that ball game or something, and you immediately think, I need to enter that information into my phone. Your brain has already integrated technology into the fiber of your being. So technology is actually becoming more a part of the human. We are probably heading more towards being a cyborg, being something different, than towards having machines replace humans. That's what I think. Now, if you want to read something entertaining along the same lines, go check out the most recent book by the guy who did The Da Vinci Code — Origin. It's really relevant to the conversation that we're having today; I won't spoil it for you. So the other question — what was it again? Oh, regulation. So we have heard people in the federal government space, in multiple countries, in the EU and the UK, talk about — what is the euphemism they use? It's called algorithmic transparency. There isn't algorithmic transparency today, so why they believe there is going to be, simply because we're using learning algorithms, I don't really understand. But AI and learning algorithms
are still dependent on the business process being defined. The business process — the steps one through N of what it is we want these systems to do — is still very much part of a solution, and that isn't going to go away. It doesn't just magically decide how it's going to serve your customer; you define how you're going to use these machine learning algorithms in the context of the overall business process. And insomuch as the government would like to have transparency for its citizens, or the other way around, I agree that everybody should understand how they're being treated by the systems that are supposed to reflect the law. We're constantly updating government systems to reflect changes in the law — but how do you know they're right? Well, you have some oversight, I guess. That's a good question. So there is a reason to be concerned about whether or not the IT systems that we have today reflect the outcomes we expect with respect to the law and how we treat our citizens. Let me go around — and I'll add one thing to that answer that Andras just provided. In my work, at least, I find most of the bias in AI systems is intentionally provided to match cultural expectations. An AI system may show you something that you may not like about your business process, and there's a tendency to just rig the system a little to the left so it tells you what you want to hear. This is not uncommon, and we're likely to see a lot of it, probably for the next decade, maybe less. And so laws coming in and bringing any kind of regulation are not going to fix that; that's going to be an evolution of business, I think. Later on, by the way, we'll talk about bias, and we'll use some really good examples that have come out recently around bias. Your comment about the law — changing the system to match the law, in my understanding, requires a lawyer, and/or many lawyers, and/or a judge, and/or a supreme court to get involved. So interpreting the law is, in and of itself, perhaps a requirement for
machine learning and artificial intelligence my point was simply that you know we have IT systems that are out there today that are reflections of our you know legal and social framework and you have to ask yourself are they really reflective of what the goal was ultimately you don't know as a citizen most people just take it as for granted let's do one more question and then I think we need to move on I'm pretty sure I turned it on go ahead thank you the question here is regarding the ROI the common business because what everybody is talking about is the AI if you want to implement and the cost is big and because that's one of the scary stuff most of the society or the business for the SME mainly the SME player they are not ready to take a call but what is the call, what will be the transition is going to be happening like now it's very much specific to some industries as you mentioned some of the industries still they are scared about that because the data of the volume data what you are going to analyze that is one of the counting it's going to take bigger costs and they are not ready to spend it but again everybody is going towards the digital transformations and this is one of the AI is only the way we can able to materialize it this is some caps and the technology influence factors where is going to match the other party to have it won the first question another one is on the same questions and regulations as an individual like Alexa and Google Assistant suddenly what is happening it's activated by the name but if that is the case always is recording all the information that means is hearing us what is the security is there what kind of private privacy is there with the home when you are in Alexa Google Assistant all your message will be recorded using it for some other purpose what kind of that is one of the things maybe the regulator may be asking we want to see your engine how it is working first because are you taking only by command and then it will be 
activating or in general everything is going to be captured and it is going to be taking for your own purpose so let me answer the second question is actually quite interesting so there are companies that are offering you know free online social services you know free services in general that they are using to analyze your behavior with and resell the data there are companies currently that are providing AI solutions that actually use the information that you provide at a lower cost and they reserve the right to learn about AI and the data that you store in their cloud that is not my company's approach in fact my company's approach is not to explicitly not compete with the entities that use our products but there are companies that are competing with those companies that are using their AI frameworks and so you have to be cognizant about the implications of that and ultimately what happens what do you think Mike? The saying has always been true if a service is provided to you for free then you are not the customer you are the product and we know this in the case of our favorite email application and perhaps a few others and for those of us who just want free email sweet free email right even better less slash fewer ads and as an individual interacting with a big tech company right something that rhymes with Google you say that's not a big deal right I'm not in the email business that doesn't really matter but what we see very often as Andros was mentioning is the cases where a business is partnering with another business and that other business also happens to do the same thing in this context client does and all the great analytics and insight whatever they're provided by their technology partner is now insight that that same technology partner can use to enable their business to be more effective who for instance is in direct competition with this customer in this context so there's a lot of weirdness in the world of AI similar to how there's been weirdness 
around genetics like I don't know how many of you all have done any genetic testing my background is in bioinformatics it's technology genetics back in the 90s this has been done on a national scale in some countries where they paid a whole company to sequence everyone that company went under and as part of trying to come out from under the debt sold off all that data so in the case of things like 23MB I don't know that I'll do it unless someone forces me to but people are happily spitting into these cups to find out your x% whatever background but once you've done that you've given away that data now I guess the average person isn't going to get into the business of DNA sequencing but what if one of your DNA sequences is a drug? well it's like narking on your future self too if you get yourself into trouble right? so I think it's a gray area where regulation still hasn't caught up that if you're providing AI services capabilities analytics there is some level of transparency to say hey this is how we're using this data let me explain it to you in human readable language and you can choose whether you still want to work with us or not in some cases you're okay if they read your email and you don't mind because you're not competing in other cases though and if your business is relying on it for enterprise function you don't want that to be used against you either by that company or another one of your competitors using that same service so you have to be cognizant of who you're using and whether they have parasitic kinds of business practices because we have seen these companies actually launch using this data their own brands against the folks that are paying to use their platform so well should we continue on or what I'd say we should continue on we're a little bit behind but I think we'll make it out okay incorporating AI into your business we're going to talk a little bit about the human in the middle how much work should AI do I mean how should you apply it in 
your IT systems to make it properly work for you what projects should you start with and maintaining and evolving your AI models and then after this section we'll take a break I believe is that a break after this? No it's a break after the one 330 and ultimately we are going to get to show you the real deal real stuff working real stuff and some of the staff members are coming in later because they are really interested in what this looks like because we pulled in things like TOGEP into some of the AI and created a chat bot for the open group so what we're saying here in this chart generally speaking is that if you are a business or organization and you don't already have a data governance strategy and you don't have an analytics strategy and you don't have any data scientists or any kind of that competency in your organization whatsoever then using AI is going to be a heavy lift because AI is all about the data your provenance over that data and its structure all influence the usefulness of the data and how much of that can be used by AI so there is a process of making data ready for your AI there is a process of using analytics to understand which is the most important data the other thing that's really important for folks to understand is that there's a lot of reference data sets you're using in your business company or business or company right now organization that are probably owned by somebody else you just don't know it and when you get into AI and you start using training models or reference models you're going to end up paying for that data one of the things that we did was we figured out early on because of Watson we needed access to all of this amounts of data so we were already pulling in the entire internet four times a day just the good parts Wikipedia primarily we excluded Urban Dictionary for those who are wondering but we also realized that if you were going to be a lance or jeopardy question you're probably going to have to know everything about 
anatomy who owns all of the information about anatomy folks maybe who publish grades and add them well are they going to give you all of their content in machine readable format for free guess what the answer is no and it goes on and on and on and on so somebody owns these data sets by the way we have actually acquired the rights to many of them and we're allowing our customers to use them either for free or for a very low cost but if you go outside of our ecosystem you're still going to have to use some of these reference sets and when you use them you're probably going to be paying somebody for royalties eventually well if you're a consultant and you're looking to figure out how to make millions and billions of dollars with AI this is it you can leave after this slide because this is really the journey that every organization will have to go on the rest of our session here is great I promise but if you're a consultant this is it and what you'll find with many organizations is they don't have a way of thinking about these problems for many of them it's they hear new technology is in the papers analysts say this thing is going to change the world they hire a guy or pay someone to talk about how great this thing is maybe spend some money into a project and then move on in the case of AI this has to be something that you prepare for and something that you build foundations to so Andros talked about knowing your data and detail what that looks like later understanding what's your relevant data again you can have all the data in the world but if you don't know which parts you care about you can't really design an effective system around it and we see this going all the way up to establishing trust in systems most people that have AI right now they trust that the answers are good enough but they don't really have a way to explain or justify it earlier today we heard someone assert that it's okay if a system does something and humans don't get it but we agree with that 
And there are companies, like IBM and others, that have designed capabilities to help you understand the biases in your data — because bias isn't inherently wrong, but you have to understand what the bias is before you can assert that your AI is not unethical. We're going to talk a little more about this later, too. An AI system has a different maintenance life cycle than your IT system, so you're going to need the skills to run it: somebody watching the training, constantly watching over the learning process. That means somebody representing the business, who understands where the business is going, making sure the outputs of the artificial intelligence are moving strategically in the direction you want for your organization. You'll also need somebody who understands the implications of training, making sure the data formats don't change overnight — data formats can change, and then all of a sudden your data might be training your AI system to believe something else. We'll talk about the implications of that as we go forward. So AI can be augmented intelligence, not just artificial — what do you mean by that, Michael?
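The maintenance point above — data formats silently changing and quietly re-training the model on something else — can be made concrete with a small "format guard" that checks incoming training records before a retraining run. This is a minimal sketch in Python; the field names and schema are hypothetical, and a real pipeline would layer this onto whatever ingestion tooling it already uses:

```python
# Minimal pre-training "format guard": verify that records still match the
# schema the model was originally trained on before they feed a retraining
# run. Field names and types here are hypothetical.

EXPECTED_SCHEMA = {"text": str, "label": str, "timestamp": str}

def validate_record(record: dict) -> list:
    """Return a list of problems; an empty list means the record looks sane."""
    problems = []
    for field, ftype in EXPECTED_SCHEMA.items():
        if field not in record:
            problems.append("missing field: " + field)
        elif not isinstance(record[field], ftype):
            problems.append("%s: expected %s, got %s"
                            % (field, ftype.__name__, type(record[field]).__name__))
    for field in record:
        if field not in EXPECTED_SCHEMA:
            problems.append("unexpected new field: " + field)
    return problems

def filter_batch(batch: list) -> tuple:
    """Split a batch into clean records and the problem lists of rejected ones."""
    clean, rejected = [], []
    for rec in batch:
        probs = validate_record(rec)
        if probs:
            rejected.append(probs)
        else:
            clean.append(rec)
    return clean, rejected
```

The design choice is simply fail-closed: a record with a new or retyped field is held back for a human (the "ops for AI" role discussed later) rather than silently reshaping what the model learns.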
Well, you know, "artificial intelligence" was the original name for much of this field — I believe it was also a movie about a cute little robot boy trying to go home. But what IBM has found, after actually applying AI, is that it makes more sense as augmented intelligence. There are some brilliant people who wonder and worry that AI will replace humans at some point, but we still believe in the value of having a human individual in the loop, partly because it eases the complexity of ethical dilemmas. If the AI told me I should punch him and I opted to follow through and punch him, that's still on me; if the AI said to punch him and a robot did it, then we have a genuinely complex moral dilemma. It also helps account for errors in your AI: I might look at the AI's analysis and say, I shouldn't punch this guy — he's a cool guy, he knows a lot, we work together, that's a really dumb idea — and I can then correct the training data. We see humans being part of these processes as critical, because as a human being I can see the data that comes out, see the recommendation, and make an assertion: is this surprising to me? Is there something about him I didn't know, or is this genuinely an error in the system? This is where I think many businesses are going to have real eureka moments. Also, Michael, don't forget that in many of the AI systems we're building, we're putting the end user in the position of helping train the system. If end users notice the chatbot is off, they can give some really good feedback — we'll show you how that works — and that tells us the chatbot, the assistant, the neural network, is not trained properly. So, how many of you know Grady Booch?
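The end-user feedback loop just described — a user flagging an off answer so it can become a labeled training example — can be sketched very simply. All the names here are hypothetical, and a real deployment would persist this store and batch it back into retraining:

```python
# Minimal human-in-the-loop sketch: record whether a user accepted the
# chatbot's answer, and turn rejections that come with a human-supplied
# correction into new labeled training examples. Names are hypothetical.

feedback_log = []        # (utterance, predicted_intent, accepted, correction)
new_training_data = []   # (utterance, correct_intent) pairs for retraining

def record_feedback(utterance, predicted_intent, accepted, correction=None):
    feedback_log.append((utterance, predicted_intent, accepted, correction))
    if not accepted and correction is not None:
        # A rejected answer plus a human label becomes training data.
        new_training_data.append((utterance, correction))

def rejection_rate():
    """Fraction of interactions users flagged as wrong."""
    if not feedback_log:
        return 0.0
    return sum(1 for f in feedback_log if not f[2]) / len(feedback_log)
```

A rising rejection rate is exactly the signal the speakers describe: the assistant is drifting, and the rejected-plus-corrected pairs are the cheapest new training data you will ever get.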
Grady Booch — yeah, you've got a few. So Grady is actually at IBM Research now. He certainly still has a passion for software archaeology these days, but he's really working on Watson, doing a lot of work with NASA and AI. One of the things he tackled was a think tank on whether a singularity is possible any time in the near future, and when that would be. The think tank he led came to the conclusion that it's probably a couple hundred years away — and that even then, as it turns out, it's more difficult for an intelligence to escape a form factor like a computing system than it is for us to integrate it into our daily lives, into our existence, as that almost cyborg-like character that evolves into a more machine-augmented human experience. So we're really thinking about the human in the loop here, not AI by itself, right? Yeah — that's essential, as mentioned, because the AI system may make a suggestion that is a horrible, horrible idea; the AI system may make a suggestion that feels wrong, but when a human takes a look they realize it's the right idea; and at times the AI may make a suggestion that is supported by the data but needs to be adjusted for cultural expectations, whether within society or within the business alone. A human in the loop makes all of that much, much easier, and we see that as the way of the future for quite some time.

Right. So, we have seen some customers — especially on the fintech side — who come to us and say, what we're looking for is a predictive model that tells us when the next economic downturn is coming, or something just as crazy. Then they give us their data, and there's a lot of craziness in it. For example, all of these financial institutions have analysts, and all the analysts write reports in different formats — unstructured, usually ending up in a PDF — and each one has their own way of characterizing the information they're providing. So trying to synthesize, or even tear apart, the data in these reports is very difficult; that's the first problem. The second problem is that doing predictive analytics is not AI so much as analytical science, which gives you a range of potential predictive outcomes. So folks, again, have misunderstood what AI is versus analytics and statistical analysis, and the implications of the data itself. Yeah — this is where the hype can go too far. People see AI the way it's portrayed — IBM has the best and worst commercials simultaneously, where we position AI as a wonder — as a silver bullet. In many ways AI can be a silver bullet, because it addresses a domain of problems that were previously really hard to address, but when it comes to building a system that does something novel and game-changing for your enterprise, it doesn't all have to be AI. In Andras's example, the customers' initial understanding was just: hand this data, in a random format, to AI and tell it to do economics.

The other area where folks want to use AI quite a bit is cyber security. They say, you've got all this log data, all this information — you're generating a lot of it, especially if you've got a SIEM running across a lot of your devices — so how do you find the needle in the haystack, and how do you predict the next attack? This is something my team worked on quite significantly with IBM Research, and it turns out to be a really, really gnarly problem. You can't really predict something that has no basis for understanding, because every single cyber attack is something of a one-off. What you can do is baseline normal behavior in your organization and identify a serious anomaly — but guess what, by the time you actually do that, you've probably already been hacked. We did do a few really interesting AI-based machine learning cyber solutions, and my team was actually the original developer of the IBM cyber security solution QRadar with Watson — Watson for Cyber. What it does is take all this information about cyber threats, curate it, and give it to you in a way that correlates it to the type of attack you might be experiencing. You can do that, not a problem, and that is AI — but it's not actually trying to find the unknown attack vector, which isn't really possible.

And I think it's worth emphasizing here that for a company like IBM, which sells IT, this makes sense for us to pursue: we have a security product, we have AI capability, and we have clever people and teams led by guys like this. But if you're an enterprise that doesn't have that budget and you still want to do AI and security, you're back to our earlier slide. Do I want to try to do this by myself? Do I believe I have the skill set, and a need, to build a custom approach? You might — I can't speak for every business out there. Or is it feasible and reasonable to buy a vendor-provided product that has that kind of capability integrated? This is where we go back to the notion of democratized AI. When we talk about picking the right projects, my biggest thing is always: figure out what's been done before. If there are models and data, and a vendor like IBM or others has a thing you can use that does most of what you need, and all you have to do is curate your data and send it in real time — that's a good project. If you're doing something no one's ever done before, that's a hard project. If you're a company that wants to sell AI solutions, great; if you're not, be prepared for a life-changing experience, but a lot of good lessons learned.

So when you go into a project, Michael, what do you look for? I look at the data first. Do they own the data? Is it structured in any way? How much effort am I actually going to expend to cleanse it and put it into some format? And is it learnable? AI wants to learn from the data, so if you've got data coming out of a giant pipe in real time, that's good data in a lot of ways; if you're talking about a very small set of data that doesn't change very often, it's not really very interesting. What do you think? No, you're exactly right — the hardest problems in AI right now, where the algorithms are known, are hard because the data's not there. And there are all kinds of really cool approaches to making up — I won't say falsifying — making data from existing data. We call it data synthesis; it's a better word than lying. You take whatever you have and modify it to better represent reality, because the reality of the world is that he and I can work together over several months and create a great chatbot that says great things, but we don't know what any of you are going to ask. We can come up with a hundred ways to say "what is The Open Group," and I bet someone in this room, in this conference, is likely to come up with the hundred-and-first way that we didn't think of, nothing like our other hundred ways — so the AI still misses it. That's just the reality of things. When you have a lot of data, you have some greater semblance of a better system early on — probably around the core of your business. Probably, right. So tell us about this chart. There are lots of different roles that now get established in your enterprise, roles you probably haven't considered before, that interact in ways they never interacted before, right?
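The "hundred ways to ask the same question" problem above is exactly what simple data synthesis tries to blunt: mechanically expand each seed question into paraphrase-like variants so the training set covers more phrasings. A minimal sketch, with hypothetical prefix/suffix templates — real systems use far richer paraphrase generation than this:

```python
# Minimal data-synthesis sketch: expand a few seed questions into many
# training utterances by combining hypothetical prefix/suffix templates.

PREFIXES = ["", "can you tell me ", "i'd like to know ", "please explain "]
SUFFIXES = ["", " exactly", " in simple terms"]

def augment(seed):
    """Generate paraphrase-like variants of one seed utterance."""
    variants = set()
    for pre in PREFIXES:
        for suf in SUFFIXES:
            variants.add((pre + seed + suf).strip())
    return sorted(variants)

def build_training_set(seeds_by_intent):
    """Expand every seed of every intent into (utterance, intent) pairs."""
    rows = []
    for intent, seeds in seeds_by_intent.items():
        for seed in seeds:
            rows.extend((utt, intent) for utt in augment(seed))
    return rows
```

Even this toy version makes the speakers' point: the hundred-and-first phrasing a real attendee invents will still fall outside the templates, which is why the feedback loop matters as much as the synthesis.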
That's exactly right. When you think AI, I know the tendency is to think of IT alone — we put on our propeller hats and say, we'll go fix it, we'll get a wrench and start banging on something — but the reality is this domain is very unique in terms of the groups that have to come together. At the core — that's my take on it — is some business process owner: someone who has a need, something they're trying to do, and an opportunity to do it better. Maybe they have to read a hundred documents every month and that takes a lot of their time; maybe they have to write minutes every week and that takes a lot of their time. They have this great idea that if only they had a system that could do some of that work, they could do greater things, focus on the harder problems. You connect that person, who knows what the business is trying to do, with the other pieces. You might elevate it to the CIO or CTO and say, hey, we want to look at new ways to address this. You might then go to your data science folks and ask, what data do we have — that is, if you have data science folks. So you have to have some data science folks eventually? Eventually, that's correct. And it definitely helps to have a CTO organization that's thinking about things outside the box, that's for sure. Then you need the developers to understand what it means to actually develop these applications, because these are very different from the normal structured programming of Java and C and C++. Anything else? Well, you've got the ops-for-AI piece. We talked about the data; your data scientists can say, here's the data we have, here's a way we think we can build a model around it. Andras mentioned the app developers — they build the pretty front end, the thing that connects you to the AI. But separate from that you have this ops-for-AI piece, and it's perhaps one of the most critical pieces in this whole picture. These are your doctors, your nurses, the caretakers of your AI system. I don't like to call it a child, but think of it like a pet: you have to care for and feed it, occasionally ask it to sit, to stand, to roll over, and validate that it's doing the thing you expected it to do. While the developers are off on a new project, these are the people watching things operationally. And the beauty of it — at least for some vendors; I'll give IBM credit as one of them — is that these systems exist in a loose coupling. If your AI folks decide they need to pivot because the business process owner realizes the business is going in a new direction, that can all be done, and your wonderful new front end that looks pretty and renders nicely on phones remains relevant. It might need a few tweaks here and there, the model might change significantly, but the application generally remains stable. We see that as important, because you might have a nice little mobile website, a big enterprise application, a reporting system — and all of those different applications may use the same underlying AI model for different business processes.

So: separate the model creation from the application development. I think you touched on that — the ideation, the design thinking, belongs to the data scientist who's trying to figure out whether this is actually going to work or not; they're really different from the app developers, who are given a template for how to integrate this into your overall IT environment. That's exactly right, and to me this is critical. This is the slide I carry into every customer engagement, because when customers think of an AI system being created, they want to call it software. They say, yep, you'll write it, you'll have a guy sitting next to your AI guy or gal, and they'll do it and be done. But the reality is the AI work frankly kind of never ends — and that's not a bad thing. At the same time, you likely want the AI model, the AI part of the system, to be ready a little earlier than the application, because in the process of building AI you always learn something. You might realize you characterized your business problem in a way that doesn't really make sense for the AI system, and what you don't want is a finished, beautiful front-end application built on a dated understanding of your AI model, because that means rework.

So, Michael, since you're millennial-ish — I'm technically a millennial; you can call me a millennial, I don't get upset. Well, we've got Gen Zs out there now — my daughter is in design thinking, industrial design, so she's a Z — but you're the one going on about generations. Tell me: you're programming with a whole different set of skills and languages than even five years ago. Give me an example of some of the programming skills relevant to actually doing this work. I'd say the biggest one — and I want to connect it to design thinking a little — is that in the AI programming world, whatever language you use is ultimately irrelevant. You could likely integrate AI with COBOL if you wanted to; I'm not sure why you would, but you could. The new set of skills that's required — and customer by customer will have a different opinion about who owns these skills — is around understanding how to look at a business problem and determine what kind of AI system you want to create for it, and understanding how to tweak and shape it. Even if you're on the far left — again, my left, your right — with specialized systems that have an API, where you pull an API key, throw data at it, and get insight back, there's still expertise required to go from natural language understanding to a meaningful application like a chatbot: to do that abstraction down to something the AI model can understand. That, I find, goes hand in hand with design thinking — with understanding how to shape your problem in terms the AI can understand and operationalize. That was a very millennial non-answer to the question I asked, so I'll answer it for you. When I started working with Michael and with some of the AI frameworks, I realized that Python is used heavily; JavaScript is a foundational requirement, with JSON on top of that; then Node-RED, general scripting, and big-data databases like CouchDB and several others, like Cloudant. If you don't have those skills — and I'm telling you, every single one of these frameworks uses those languages to manipulate data — you'll have some catching up to do. It was an interesting experience for me, because I was a Java, C++, C guy, so I had to learn all these interpreted languages. Well, to be fair, you can still use Java — we even support it as an SDK. I know, but that's not what you've seen the most of. That's fair. So we're back to another Q&A period — how much time have we got? Twenty-eight minutes before the break, so we're behind quite a bit. All right, we'll take a question — who's got the question?
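A tiny illustration of the Python-plus-JSON scripting skills just described — the bread-and-butter work of reshaping raw JSON records before they're loaded into a document store like CouchDB or into a training pipeline. The field names and records here are entirely hypothetical:

```python
import json

# Illustrative JSON wrangling: normalize raw Q&A records into a consistent
# shape and drop duplicates. Field names and data are hypothetical.

raw = '''[{"Q": "  What is TOGAF? ", "A": "An architecture framework.", "src": "faq"},
          {"Q": "what is togaf", "A": "An architecture framework.", "src": "chat"}]'''

def normalize(record):
    return {
        "question": record["Q"].strip().lower().rstrip("?"),
        "answer": record["A"].strip(),
        "source": record.get("src", "unknown"),
    }

def dedupe(records):
    """Keep the first record for each distinct normalized question."""
    seen, out = set(), []
    for rec in records:
        if rec["question"] not in seen:
            seen.add(rec["question"])
            out.append(rec)
    return out

cleaned = dedupe([normalize(r) for r in json.loads(raw)])
```

Nothing here is exotic — which is the point the speakers are making: the scarce skill is not the language, it's the habit of getting data into a learnable shape.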
let me see the hand again there we go let me bring you the mic thank you you covered a couple of items there in terms of data and also structured data unstructured data as well which brings a couple of questions to my mind and we're talking about using AI for business processes so would it be an incorrect assumption that you could use AI to point out or bring about business process that needs to be resolved by just using unstructured data because sometimes data could be clean, data could not be clean as well so could you then apply artificial intelligence on either unstructured data which is not clean to help improve the business process sure absolutely I mean that's part of data cleansing we do that all the time in fact we have annotators now that take information and scrub it from PDF files in fact Michael is going to go through a whole painstaking effort that we took TOGEP and we pulled it into discovery Watson Discovery and what we found out was that there wasn't even though you look at TOGEP and you look at the book and it looks like there's a standard set of structured chapters and sub chapters and so on so forth you had to actually use little exponential data wrangling cleansing skills to figure out where a chapter began and that section header and different sub types so that you could actually categorize those and feed it into Watson because otherwise you could get a giant blob of text and it becomes non-context sensitive I guess that's the right I mean human readable is not necessarily machine readable and we've known this for quite some time but the reality is some of the great sources of insight to feed AI systems and it's the point where companies like IBM have created products around this take advantage of data and forms that we never really built for anything other than people they just needed to look nice so we'll show in one of our examples how even some of the great open standards that are out there were built for human consumption and so putting 
them into a computer system is a little tricky because they represent combined human expertise there's no single database of TOGEP because TOGEP is a complex thing a lot of great insight ideas and guidance so we'll show a little bit about the process and show you the end result of the kind of systems you have and the kind of value they can provide yep all right we're going to take a break no one more we do three and then the break is three-third so we've got 15 minutes until the break implementation best practices all right so it starts with data we talk a little bit maybe we can we can probably zip through this as a speed run evaluate, continue on my problems handling easy AI and addressing harder ones so this is my favorite slide I just heard the story a few times so my son we we live in a neighborhood where there is a grocery store chain called Harris teeter and when he was very young three or so years old you know we said we were going to Harris teeter and he goes oh we're going to Harris teeter totter and we were like yeah that's a great name for that grocery store we'll just call that Harris teeter totter from now on out and we and my wife would go hey I'm going over to Harris teeter totter yeah okay I'll see you later and 17 years went by and we were out in the parking lot going to my parents house and we dropped by Harris teeter because we needed to pick something up and we were in the parking lot talking about what it is that we need to get and we're having this heated discussion between my wife and I no I don't really think you know it's fine and all of a sudden my son goes hey it's not Harris teeter totter it's Harris teeter well why is that important or relevant here because you can teach in AI the wrong thing and essentially my son kind of learned you know it wasn't intentionally mean we all thought it was an open joke but he never realized that he never really read Harris teeter he just saw you know in his mind saw Harris teeter totter your AI can do 
very similar it can learn the wrong thing and give you the wrong answer and so you have to test your AI against reference data to make sure that it's at least on the right path and then make an incremental step as you go on to train it for more instances Michael do you have anything to say a critical assumption in what you've just shared is reference data this is why in that nice ladder the consultancy cheat sheet the foundation is having your data if you don't really have a reference data set of whatever you're trying to characterize this becomes super hard if none of the Harris teeters had any kind of signage his son would still be calling it Harris teeter totter it would be the world's longest lived open joke so without reference data looking in your business and what's the data that says yes this just happened AI becomes a much harder problem because you don't know if you're right or not hence why we think it starts with the data without the data there can really be no AI that's central to your business beyond simple solve problems like visual recognition of a car or you know natural language processing based on maybe an existing model like the one in IBM Watson or other vendors so if you want to create something super customized to your enterprise you'll need to have that data available otherwise it's I don't want to say impossible but it's probably close to that or are you going to have to acquire it from somebody right so you got to know your data too that means you have to actually have some you know map or data governance to understand what data you have and that's really back to EA without the data continuum in EA then you don't really have an idea of what data your enterprise has and what structure it's in so that's going to be kind of a starting point you know who's the data scientist or the DBA or the data team that owns your data repository right and if you again another consultancy pro tip if you want to really seem like a sweet expert and your 
client says they want to do AI, just ask what data they have. Once you get a sense of the data, you'll find many AI ideas aren't connected to the availability of data; but if you look at the existing data set and say, "here's what we have, here's what we think it will tell us," then you have a basis for figuring out what might be a reasonable project. If I want to understand how people feel about me based on the texts they send after we go out to lunch, I have no way of solving that, because I'm not a government agency with texts from everyone's phones. If, however, it was in a group chat and I wanted to see what the group chat was talking about and how they felt — well, being in that group chat, I can download all those texts and do analysis on the sentiment of the group around different topics. But it's all about the data you have. So before you come up with some crazy idea, figure out: do I have data that supports it, can I buy data that supports it, can I get rights to it, as Andras mentioned, and is it annotated?

All right, yeah. If it's not annotated, nobody really understands the meaning of your data. So there is a pipeline of data that feeds into AI, and the data warehouse of the past really informs the use of your AI engine. Yep, because ETL and transformation is still a thing; you still have to do that work. So what do you think?
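The group-chat idea above can be sketched in a few lines. This is a deliberately toy illustration, not any vendor's API: the lexicon, topics, and messages are all made-up sample data, and a real system would use a trained sentiment model rather than word counting.

```python
# Toy sketch: score sentiment of group-chat messages with a tiny
# hand-built lexicon, then aggregate the scores by topic.
from collections import defaultdict

# Hypothetical mini-lexicon; real systems learn these weights from data.
LEXICON = {"love": 1, "great": 1, "good": 1, "hate": -1, "bad": -1, "boring": -1}

def score(text: str) -> int:
    # Sum the lexicon weight of each word (punctuation stripped).
    return sum(LEXICON.get(w.strip(".,!?").lower(), 0) for w in text.split())

def sentiment_by_topic(messages):
    # messages: iterable of (topic, text) pairs -> {topic: total score}
    totals = defaultdict(int)
    for topic, text in messages:
        totals[topic] += score(text)
    return dict(totals)

chat = [
    ("lunch", "I love that noodle place, great soup"),
    ("lunch", "the queue was bad though"),
    ("ai-session", "hate to say it, that session was boring"),
]
print(sentiment_by_topic(chat))  # {'lunch': 1, 'ai-session': -2}
```

The point of the sketch is the shape of the pipeline, not the scoring: you can only run it at all because the chat data exists, is downloadable, and is implicitly annotated by topic.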
Well, in the case of AI systems — and perhaps it's just novel to me because this is the first time I've encountered it — there's the reality that you can feed some kind of big-data AI analysis system starting from something super unstructured, like a PDF or a Word document. That seems like magic to folks who haven't seen it before, but you can now create insight, and systems you can query, from documents that were literally published for human consumption; every time somebody spits out a PDF, you can feed it in automatically. The gory details of making that work successfully will come later, because there is some nuance to it.

So we've got to consider a range of problems. Yep, and this comes down to understanding what data you have. You don't just say "we'll do this AI problem and bet the business on it," because that's probably not a good idea. You say: here are the business processes that are central to our business; if we did this 5% better we'd make 20% more revenue. Figure out which business processes are impactful and lay them out on the table — by impact, by perceived risk, by availability of data, whether internal or external. The more you have to develop from scratch using those generalized frameworks, the more resources you'll have to apply. But it's a decision like any other decision in the enterprise to adopt technology. Even though it's AI, and it's a great big umbrella, you still have to be very intentional about how you pursue it, almost like the way EA is developed: you have to decide what the right component is. Some low-hanging fruit — we've got examples on the next slide.

All right, so it comes down to picking the right type of AI based on your problem, and this is where we figure out our low-hanging fruit. My usual guidance is: if an AI system already exists that can reasonably solve your problem, then you should probably use
that existing implementation. So what are good, fun, easy examples of this? Well, sentiment analysis is an easy one; that's doable everywhere. You can do it on-prem, through open source libraries, or by buying products and licensing APIs. Visual recognition is another great one. And sentiment analysis is really relevant today with Twitter, right? You're getting a lot of feedback from customers on Twitter, from social media. Doing sentiment analysis on your own documents is probably pretty dull, but doing sentiment analysis on how people perceive your enterprise — that's a whole different ball game. And with all that data coming at you in real time, you can actually graph your sentiment over a period of time.

And with AI systems where they are today, even the super-specialized, out-of-the-box stuff can give you a finer level of granularity. It won't just say "he was really upset about something"; it'll say he was really upset, and he mentioned this AI session by two guys from IBM, so it must not have been a very good session. Then you have insight: let's not repeat that session. It's that level of insight that's really valuable, because it's not enough to know that someone hated your product or the experience; you want to know what they didn't like about it. And you likely want to know whether that person has a pattern of criticizing things: did the person who panned this session also pan literally every other session they talked about? If so, maybe you don't weigh everything they say as much; whereas if someone who's been loving the Open Group for years had an open critique after being an avid supporter, that might be a data point you consider a little more closely. You know, the world's leaders speak on an almost ongoing, everyday basis, and you can take sentiment analysis of what they say and infer what might happen — and this
is something governments are actually using AI for: you can tell whether you're about to get into a scrum with another company — or country, sorry — from the tone of the leader and how it escalates over a period of time. Subtleties that you, Michael, probably wouldn't understand. Well, there's another fun example. One of our other colleagues at IBM, another Distinguished Engineer and one of my first mentors at the company, wrote an application that does a tone check on all of her emails, because she had gotten the feedback multiple times that she came off a bit critical. So now she's got a system that analyzes the tone of her outgoing emails, so she can determine: is this appropriately toned for my intent? Again, that wasn't a huge lift or a hard application to create, but she understood the need; she recognized that she wanted to align her tone to her intent and to her audience, and she was able to do that.

Another low-hanging fruit is translating from one language to the next. I know we do a lot of that here, but some of these systems have gotten so good that they can do it in real time and be very close to absolutely correct, so that's something to consider. But we've talked about some of the low-hanging fruit — is low-hanging fruit really where you want to go? Well, you've got two choices to evaluate. The first is where you say: okay, I know a bunch of these existing specialized AI capabilities, I know they're easy to use, maybe I have experience using them on a prior project. Let me look at these capabilities, from a single vendor or across vendors, and figure out whether I can combine them in some way to create the new, novel capability I want. Is there a way for me to do some kind of analysis that says, hey, I can create a data processing pipeline where I get a tweet, I
run it through some analysis to do sentiment, then maybe through a separate analysis to do inference, and I end up with a set of data and an inference engine that can tell me, given the topic, how I think people will react. That inference engine — a recommendation engine on its own is a relatively difficult thing, but if you combine some of the existing specialized AI, you can create it. IBM's Jeopardy Watson is an example of a recommendation system: you were asking very explicit questions, and it took all this data into account and made a recommendation — "this is the thing I think it is." There was never just one answer; there were multiple, but we always went with the best one.

At the same time, there's the possibility that nothing in the specialized AI world seems to do it, or it's not sufficient, and in that case you might build a custom AI implementation using some of the lower-level frameworks. You can still pull in something like IBM Watson, Microsoft, or Google, but you might have to go whole hog on that base of a custom AI implementation.

So — we haven't said this yet, but a lot of folks think that somehow RPA is AI. Is RPA AI? I'd say it depends on who you ask. In the end, RPA is about automating a process; you have your inputs and outputs, but it doesn't really go much beyond that. You might have RPA vendors who do some AI magic for you, something that looks at data and makes an inference — "I bet this should go into this business process" — and that might be AI. But at the core, RPA doesn't require AI to happen. I'd say it's not AI; most vendors providing it are just providing automation. And I'm not saying it's not interesting, because I've seen some really interesting RPA implementations that save a lot of money, but I don't really think of it as AI.

So, deep learning systems. Those are pretty heavy-duty; that's maybe a space, military, aerospace kind of opportunity. There might be other applications as well, but
to get to that point, you have to have a problem where you really understand the different factors you want to pick up on. Again, when you think of any of these systems, just picture in your mind creating a human being who has one job, and teaching them how to look at the data and make some decision or recommendation — and again, our advice is that you put a human between that system and the decision, but think of that as the system. So even with a deep learning approach, you still have to have data that's useful, and you still have to know how to translate it to a business problem you want to solve.

All right, well, I think we made it back to the questions. We've got a solid minute for questions. Does somebody have a question? There we go.

[Audience] Thank you very much. So many things are happening here; my question is very straightforward. When we're talking about AI applications, one point is that the field is not mature, if I'm not wrong — it's still not mature, because we're also transitioning from human work, with IoT coming down to the factory floor, and therefore anything we train now may not stay applicable. My understanding is that whatever you train on today's data won't be relevant to the next digital revolution, right? How is that going to be handled? That means supervised, or unsupervised, machine learning comes into the picture; someone has to monitor what's happening, and for any anomaly detections you observe, you have to retrain it. What is your view on that? AI is not that robust as such right now.

So I think that's part of the issue. I think the training systems will get better, but I don't see that you can simply let an AI off
the leash by itself. That doesn't mean it's immature; it just means it is what it is: a learning system, and you're going to have to watch it from a bias point of view, a training point of view, and an output point of view. What do you think, Mike?

These AI systems also provide a window into our own souls, right? They give you, as a business, an opportunity to understand what you might be missing today. Where I do see immaturity in my client engagements is in the ability to accept that criticism. I've had systems that I designed and led the development of where the organization said the answer was not correct, and I've stood in front of them, like a PhD making the case for the AI, arguing that this is very much the correct answer, but the business said, "not the way we think." So we put the blinders on, and the system now says the "right" answer based on their expectation. To me that's the greatest immaturity: many organizations that say they are data-driven are only data-driven insofar as they understand the data, and only as far as their enterprise culture allows.

You want me to? All right, so now we've come to the meat, for those people who like to see things that are real. It's very easy to talk about AI, and it's fun as well, but this stuff becomes more meaningful when we look at actual examples, understand how they're built and the types of AI systems being leveraged, and then see them in action. This will illustrate the general process of adapting AI to business use cases; it will showcase a couple of actual use cases in the Open Group with AI applied to them; and then we'll have a bit of a discussion around where the system is today and what we could do if we wanted to move it to production.

So our first example is an assistant — the colloquial term is chatbot. I really hate that term because it makes it sound easy and churlish, but in fact it's not. What a chatbot or an assistant is:
essentially a conversation between you and an AI. It's like a conversation between two humans: when I ask Michael something about cybersecurity, he says no, and that means I need to find somebody else. But when I do find somebody who knows something about cybersecurity, we're likely to have a conversation where I ask a series of questions, and they use their knowledge base around cybersecurity to provide me with answers. At the same time, a more motivated employee might say, "I don't know anything about cybersecurity, but I can go find out for you, because I know someone who does." If you have your cell phone and I don't, and I ask you some question, you could type it in, go find it, and still give me the answer — which is a relevant analogy to the way an assistant or a chatbot works.

So in that context, what does that mean? It means I have to be talking to something that has a knowledge base, a corpus of information trained around the subject matter I'm interested in. And that means you have to teach your assistant or chatbot the subject matter in which you want to use it. The use case that's probably most familiar is a support use case: there are an awful lot of situations where a company's or organization's staff constantly field the exact same set of questions over and over and over again, and they really don't enjoy that very much — I don't know why. So what you want to do is provide an experience where the user feels like they're having a conversation; you want to create a modality that fits the human themselves. It could be voice recognition, or it could just be natural language processing over text, which is the example we'll use today. But in reality, Michael has this set up so that you could actually ask TJBot a question via voice and the microphone, and TJBot will pick that up and pass it on to
the chatbot he has integrated into it.

So for the example we're going to use today, we'll take the Open Group professions standard, and the support for becoming certified, as the case study. What we want to do is create a chatbot or assistant that answers questions about becoming certified, moving from one certification level to the next, resetting your password, things of that nature. So what does that mean for the process we'll go through to build it? Well, first off, we have to know something about what the common questions are, so we went to Debra on the staff, and she came up with a document. Michael and I, fortunately, had already been building a chatbot for something very similar, called Cary. What does Cary stand for? Career Architect Certification Journey. It's not the best acronym, but I like it. I don't think I came up with a very good name for this particular chatbot either; I tried, but naming these systems is really the hardest part. I think I came up with something like Toby.

Anyway: first off, you have to have some knowledge about the subject. You want to know what questions are going to be asked, and you certainly want to know the correct answers. You want to know the entities involved — the data entities that are part of the subject matter and corpus — and you want to know the synonyms that might be used, because there are a lot of ways of asking the same question and expecting the same answer. I might say, "I'd like to know something about architect certification"; somebody else might ask, "Can you tell me about architect certification?" or "I have a question about architect certification," or, even better, something as simple as "certify me." Right, so let's get into this a little bit. You've got the Word document, if you want to show some of these examples?
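The many-phrasings-to-one-intent idea just described can be sketched in plain Python. This is a hypothetical miniature, not how Watson Assistant is implemented — assistant platforms train statistical classifiers rather than counting word overlap — but the shape is the same: example utterances define an intent, and synonyms are normalized before matching. All intent names and examples here are illustrative.

```python
# Toy intent matcher: normalize synonyms, then pick the intent whose
# training examples share the most words with the user's utterance.
SYNONYMS = {"cert": "certification", "certified": "certification",
            "certify": "certification"}

INTENTS = {
    "architect_certification": [
        "tell me about architect certification",
        "I have a question about architect certification",
        "certify me",
    ],
    "reset_password": ["reset my password", "I forgot my password"],
}

def normalize(text):
    # Map each word through the synonym table so variants collapse together.
    return [SYNONYMS.get(w.lower(), w.lower()) for w in text.split()]

def classify(utterance):
    words = set(normalize(utterance))
    best, best_overlap = None, 0
    for intent, examples in INTENTS.items():
        overlap = max(len(words & set(normalize(e))) for e in examples)
        if overlap > best_overlap:
            best, best_overlap = intent, overlap
    return best

print(classify("can you certify me"))           # architect_certification
print(classify("I want to reset my password"))  # reset_password
```

Even in this toy, notice that "certify me" only matches because the synonym table folds "certify" into "certification" — which is exactly why the curation work described above matters.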
I'm doing exactly that. Excellent. So here are some of the top questions Debra came up with: "I'm interested in applying for Open CA certification and would like further information," and she even has a reference answer for us. One of the things I can do is integrate this within an existing system and pass in, from the environment, information about who is using it. I didn't do this in this minimum viable product, but it can start off by saying, "Hi Andras, welcome to Bob" — or whatever the name of our certification chatbot might be — "can I help you with information about the open professions?" And if you know anything about Bank of America — you might, if you're from the United States — they launched an assistant called Erica, and Erica is an IBM solution. You can say, "Can you show me my balance, Erica?" and Erica will come back with information about your balance; you say, "I'd like to know the last three transactions on my Mastercard," and it'll come back with that; and you can ask it all sorts of other reference questions and it'll come back with guidance on how to find that information.

So here it is. The first question was "I'm interested in Open CA." The others: "I'm Open CA certified but I can't find my name in the directory," "How do I set a personal code?", "I'm certified but I haven't received my badge" — that's a nice one — "I'm certified through my employer but I've left the company, how do I keep my certification?", "I want to re-certify," and so on and so forth. Because of the work we did with Cary, I got a little head start from what Debra provided, and there was actually much more you'd need to know to make this a real conversational chatbot, so I augmented what she gave me with the information we already knew.

So this is the Watson Assistant canvas. I'm going to go ahead and open the Assistant workspace, and immediately
you'll recognize that we have a few elements here: one of them is intents, another is entities, and then you have dialog and the content catalog — we won't be going into the content catalog, but if you want a jump start in a particular space, what they've come up with is a taxonomy for you to use out of the box. So let's look at our intents; we're in the intents space. What are some of the intents somebody might have when interacting with an assistant around certification? Let me paint it a different way for those who are attuned to other machine learning and AI language: think of this as a supervised learning system, and think of these as your sentence-level classifications.

How many of these do we want? Well, the question, as Andras has posed it, really comes down to: what do we want to recognize when a user is talking to us about the open professions? One question might be: who the heck is the Open Group itself? So I've created an intent called "the open group." I can add a description here, which I didn't do at the time — I can see I'm typing on Michael's machine, which is driving me crazy. Anyway, what are some of the examples here? "I need to know more about Open Group certification," "How do I get TOGAF certification," "I want to get Open Group certified" — all of these are good examples. I don't think I have any synonyms here, let's see. But one thing I'll call out for this particular intent: you'll notice the questions are rather broad; all of them are asking about the Open Group. If you have a savvy eye — jump back to that intent just for a second, Andras; that one, yes — okay, this one is a lot tighter, around simpler "what is" questions, while the other "open group" intent, you'll notice, is sort of a general catch-all; it was actually more about certifications. So when you ask somebody
about "the open group," it was intended to trigger a conversation around just what the Open Group itself is; the other one was more about "tell me about Open Group profession certification," and all the different ways you can ask that question. So I could use a synonym for a particular phrase — "tell me about," "I'd like to ask you" — those are all essentially equivalent phrases. What you try to do is build a corpus that leads you to the same response, or leads you to a particular response.

Let's do another one: certification pay. "Do I get a raise if I get certified?", "Will certification get me paid more?", "Will I get paid more by getting certified?" All of those are questions you might have people ask, and we have them ask them because we're tracking this information. This may or may not be the number one intent — it might not be.

Here are some of the entities. For example, architect discipline: in the profession certification there are three different disciplines right now. One is business — you get certified as a business architect; another is enterprise architect; and the last one is solution architect, or IT architect. So IT architect has three — well, actually four — different synonyms: IT architect, ITA, architect, and solution architect, and all of those boil down to IT architect. If you're following along from the general AI and machine learning handbook, what's under entities you could think of as phrase-level classification. So you'll have one classification for the entire sentence that says, "here's what I think they're asking about," and then you'll have phrase-level classifications that say, in the context of that whole sentence, in that class, here are the subclasses we're applying. Right: he's asking what the Open Group is, and he's also asking about solution architect — that's context that might drive the kind of response we give.

So if I ask about the Open Group, this is
the dialog, and the dialog has a set of nodes, and those nodes reflect different intents, entities, and states. I can establish state by setting a parameter. For example, if you previously asked me about becoming architect certified, I can save that state, so I know that context in a variable now; or I could go back and look at whether the last thing you said about an architect was about getting certified as a business architect or as an IT architect, as an example. I can create state and flow within the dialog, so I can mirror what your expected experience should be. For example, in this particular dialog, you come in, you get a welcome statement; you're expected to ask about what type of architect or specialty you're going to get certified in; and then you drop down into the intents specific to those particular types, and they in turn provide you with information about where to find more context around those areas.

So here, for example, we look at the generalized questions. Generalized questions are pertinent to both — and soon all three — open profession certification programs. So there are general questions about certification within the Open Group, under the open profession certifications themselves: questions about accredited program certification, conversion, certification pay, where I can find help with respect to a claim, "I need to get an extension on my certification," "Where do I find the Open CA FAQ" — and I would also add one here for Open CITS — "I need a mentor," "I need support"; in other words, "I've got to talk to somebody, how do I get my question to a human being," and so on and so forth. You want to make sure that last piece, getting to support, is the last option when using a chatbot, because you're really trying to solve the person's problem before it gets to a human.

And many of these capabilities — at least the capability IBM has — have analytics on the
back end, which is pretty important, because you know the questions you expect, the ones that get reported; but when you end up with an AI system that's as easy to reach as going to a URL, you don't know what people will actually ask. The example Andras provided about the linkage between certification and pay was a question we didn't realize we were getting asked as often as we were. So early on, you have a set of ideas representing what you think people are going to ask, and answers you think address that; but this system is a stake in the ground. It's almost like a customer sentiment station: I can hear questions and, later on, give you answers and insight around what people really ask about. Maybe when you tell people about the linkage between certification and pay, they get really upset — and that might inform the way you create your messaging, and perhaps the way you address some of your policies.

So in the case of asking about the Open Group — like "who the heck is the Open Group," or just "open group" — we've got nine minutes left on this section, so we'll buzz faster and keep the last piece short. What this does is say: hey, I recognize that this is a question about the Open Group, or the entity "the open group," and here is the text I'm providing back, plus a URL and the Open Group's image. In here I've got additional answers I can provide — "ask me about the Open Group," "Open CA," "Open CITS," "ask me about open profession certifications" — and you can serve those randomly or sequentially.

So this is the back end that gets trained. Let's pretend I need to add "I want to re-certify, what do I need to do?" The question is really "I want to re-certify," so I'm just going to copy that. As your data curation step, you'll see all the answers, all the questions you get asked, but ultimately you want to refine that into a signal that is obvious to the system — or more obvious, let's say. So I'm going to add an intent, and I'm going to call this
ask_re... I can't see that. It does help if — you've got a space between the underscore and the R. Yeah, I know. So now I'm going to add an example: "I want to re-certify." It keeps flagging "re-certify" as misspelled — why is that? It just doesn't know the word. "How do I renew my certification?" — that's a good one.

Now, when we're creating these kinds of examples to represent a class, it's important that we provide a lot of diversity. If we just kept on with "re-certification," the system would likely be trained that anything with the word "re-certify" or "re-certification" is immediately about this. That's not wrong, but then we'd miss the opportunity to recognize statements like "How do I renew my certification?", "My certification is about to expire," "I have an expired certification." So even for this rather simple example of a chat system, you really have to make sure your data set is diverse enough to represent reality; you don't want to bias it one way or another so that it fails to recognize all the possible inputs a user may provide.

So in this process I'm setting context so that the profession is equal to architect; I know the person has already selected either architect or specialist as their profession. I'm going to go ahead and add a node to the dialog here; I'll call this node "re-certify," and if the bot recognizes the intent to re-certify, then I want it to respond with the text that was provided — or at least part of it, in this case — and finally I want it to wait for the user's input.

So now I can try it in my user interface — and this is just one way I can test this out; we'll talk about modalities in a second. When I bring it up, it starts off with: "Hello, welcome to the Open Group profession assistant. I want to help you with open profession questions. Ask me about open profession certification or about specific certification programs like
Open CA." And I can say, "I want more information on Open CA," or on profession certification — and what this test dialog tells me is... well, it thinks I asked about the Open Group, and I probably misspelled something, so let's try this again. "I'll drive if you'd like." No, it's okay. All right, there we go — I did misspell it. So you can see right here that it found I was asking a question about profession certification, and it returned the information from the dialog for IT professions: "The Open Group certifications give you recognized, credible and portable validation that you have the knowledge, skills and experience to get the job done," and it gives me an opportunity to go out and read the rest of the profession certification material. But in reality I want to know more about Open CA, so I can say, "Tell me more about Open CA," and it says, "You can find more information about all the Open Group professions at..." — and so on and so forth. Now, really, I got an error here: I want this to actually give me context about becoming certified as an architect, the profession certification. So I'm going to pick that, and the model is going to retrain — it says "Watson is training" — so that I get the right answer here. Part of the process is training the AI so that you get the right answer at the right time, and it's not only training it while you're developing it, but training it as people are using it, and we'll show you how that works in a second. So there you go: now I'm getting more information about Open CA right out of the box.

And let's see: "I want to get re-certified." And I got an error, and it needs to retrain. So this highlights some of the complexity that exists: in this system you have, on one hand, a set of classifications at the sentence level, another set of classifications at the phrase level, and other settings you train in real time as you develop the system. Powered by both of those sets, the phrase-level and sentence-level classes, you have this
ultimate brain and logic of the chatbot. These systems, as you're seeing, all have separate components that may evolve separately: the classifications, and the data behind those classes, may change over time, and the logic he's showcasing here may change over time based on the business process. So even for this simple example of an assistant — and as we said, "chatbot" sounds churlish — there's still some complexity. You have expertise here, people who know the questions being asked, but it takes effort to characterize what people are going to ask, how we want to answer them, and how we connect those two into an experience that is meaningful — so that we collect data that's relevant and give people answers we think will resolve their concern.

This is one of the modalities: there's an embedded web page that comes up with the information about profession certification. Here I ask about Open CA and... you know what's funny, I'm getting different answers from when I was on my machine. Did you move this anywhere?
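The on-stage surprise here — answers changing after retraining — is exactly what a reference data set guards against. A minimal sketch of that regression check, with an assumed keyword-rule model standing in for any trained classifier and a made-up reference set:

```python
# Sketch of regression-testing an assistant: after every retrain,
# replay a reference set of (utterance, expected intent) pairs and
# gate deployment on the resulting accuracy.
def evaluate(classify, reference):
    # Fraction of reference utterances the model classifies correctly.
    hits = sum(1 for text, expected in reference if classify(text) == expected)
    return hits / len(reference)

# Hypothetical reference set curated from real user questions.
REFERENCE = [
    ("how do I renew my certification", "re_certify"),
    ("my certification is about to expire", "re_certify"),
    ("what is the open group", "open_group"),
]

def naive_classify(text):
    # Stand-in model: keyword rules that mimic a trained classifier.
    if "certification" in text:
        return "re_certify"
    return "open_group"

accuracy = evaluate(naive_classify, REFERENCE)
print(f"accuracy = {accuracy:.2f}")
assert accuracy >= 0.9, "model regressed - do not deploy"
```

In practice this runs in a CI pipeline, the DevOps analogy to unit testing that comes up just below: a change to intents or examples only ships if the reference set still passes.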
I didn't move it anywhere. Might be some of the updated training you're providing. I don't know about that. Let's see here, I want to get... I'm doing this in real time, which kind of highlights the fact that I've made a change to everything and it's not actually providing the output that I expected. So this is again representative of the complexity I just described. We've got the set of sentence- and phrase-level classifications, and as we add to that classification, as we add more data and more classes, we influence the underlying model, so that a sentence that would have been classified one way is now classified differently. And in our exercise right now we added a couple of new classes and edited some existing ones, and all of that has effects that you don't see until you do testing. So in our case we're showing you some of the raw stuff that happens while at the same time showcasing the kind of business application that exists. This complexity and under-the-hood view is not meant to scare you, but just to inform, you know, the message we're sharing. Right, even for something as well known as this kind of use case, there's some amount of coordination and a definite amount of discipline. Shout-out to DevOps, because there are DevOps ways to test a lot of this, to make sure that your system continues to be consistent. This is where things like your reference data set can be used to validate that, when we made a change, the system works the way we expect it to, very similar to unit testing in traditional software applications. Yeah, and nothing like programming on the fly during a conference, but I can actually delete this and probably get the right answers. But nonetheless, there's also, let's see here, we've got a Slack bot modality where we can ask questions via a Slack bot. Here: start over. You can always just say start over and it'll start me from the top. I want to be... to get... it said, I don't know what the heck you're talking about, you've got to ask me about the architect stuff. So I'm just going to say OpenCA, and how it
tells me about OpenCA. And I would like to get certified as an OpenCA architect, and it provides me with more information about how to get to that. There we go, it tells me that I can get a self-assessment tool and where to get information about the fees. So that goes to show you that you can embed this in different types of clients. I'm going to have to actually figure out why I retrained the model to the point of breaking, but hey, that was a good exercise that at least showed you that you have to be careful about how you actually program these things. But prior to breaking the model, I was actually getting all of the right answers about where to find information about OpenCA and Open CITS. I could go down the stream, I could ask how to get certified as an enterprise architect, which seems to be working right now. Well, and what's noteworthy and meaningful about showing an integration with a chat application like Slack is that Slack has a web experience, a desktop experience, but also a phone experience. So by having this AI system that we've now trained on OpenCA and these open professions, we can now see it experienced in different modalities, and most AI systems are likely going to have different manifestations across, you know, web thin clients, thick clients, and even, here, mobile. And courtesy of a mobile device, I have, you know, my mobile device provider's automatic speech-to-text, so I can speak to this thing and get an answer back. So these are some of the ways that you take these basic AI capabilities and plug them into the existing ecosystem and end up with very rich user experiences where you're not responsible for the entire implementation. So it actually is finding the right answers now. So I'm asking a bunch of questions about, you know, where do I find my badge, where do I go to get more information about where my badge is (that's in Acclaim). So I've embedded here images from the Open Group about badges. You can do all sorts of fancy things like that. You can pass context in about who
is logged in, for example. You go into the Open Group website, you log in, you can pass in the user information. You could theoretically integrate it, well, not theoretically, you could integrate it with the certification system and have it produce very specific information about the user that is using the chatbot, just like you would with Erica at the bank. So that is all I have on the chatbot itself: the artificial intelligence model that is using natural language understanding, and it's building a context model for the corpus that we've created around the certification entities and the questions and intents that we have set up. So we've got a picture we can show before we jump off this. I know we've spent quite a bit of time, but I think it's worth it, being in a room full of architects. So this is effectively the high-level architecture of what you've just seen. You have some user experience, could be web-based, could be Slack-based, could be a robot, and they have some question, some statement. We classify it, we enrich it, so we understand somewhat of what they're asking, somewhat of, you know, given the context of what they're asking about, what are they asking. And then we come up with some answer based on a context of sentence-level and phrase-level classifications, and ultimately we give an answer, and hopefully they like it. If not, they yell at us and we figure out how to do it better next time. This high-level architecture is something that can be extended and interpreted different ways for different contexts. In the case of OpenCA we saw a system built to be a bit of a customer service system. The purpose of this was to give people ways to understand how do I get information about these open professions, and how do I do it without having to bother him, because he doesn't read, you know, hundreds of thousands of emails. Right, you don't need hundreds of thousands more. But if you have a system that can understand what these people really need, then
they can provide that to them in real time. So as you see this system in front of you, you can imagine in enterprises, right, other chatbot use cases that are also common: things like support help desks for IT, as well as support help desks for core enterprise functions, both internally and externally. And as Andras mentioned, a great thing about many of these systems is you can integrate them with some third-party system, you know, whether it's an existing enterprise SSO or some customer experience, so you know, before they even say hello, exactly who you're talking to, and that can inform some of our dialogue as far as the answer we provide. For instance, the answer you give someone like him when he asks about certification may be different than for someone like me, who's not yet, but soon to be, certified. So now, looking at another system which is similar but with a different context, we're going to look at another question-and-answer system, but in the context of TOGAF 9.2. Now TOGAF 9.2, for folks who are aware of the Open Group, is a great Open Group standard that describes different ways that we can create and do enterprise architecture. There's a lot of great expertise, and many folks may even get a certification in TOGAF to show that expertise. One common occurrence, though, is that folks who are new to TOGAF, even if they're certified, may have some difficulty in understanding how do I do a thing with TOGAF. They may be certified and have good mentors, but they may just not understand how to translate a real-world issue into something like TOGAF, or they might know something about TOGAF and want to learn more about that something. So the approach we've taken here is to take a similar thing like we showed with the chatbot, but now oriented to TOGAF. And in the case of TOGAF we have a document that's very knowledge-heavy, so what we did is run it through a data pipeline similar to what we showed earlier and create an application which, latency permitting,
we can use to run queries against. This application serves to provide some kind of insight into TOGAF. Now keep in mind, our core user here includes people who are experts at TOGAF and likely know the answers, as well as people who have never heard of TOGAF before and don't really know where to start, but they have some ideas, they have some words that they know carry some meaning. So the way one would interact with this is you take whatever question you have about TOGAF. And you know TOGAF pretty well, what's a good question? Let's see, what are the stages in the ADM, the architecture development method? Let's be easy, let's make it the long version, architecture development method, and let's see how latency treats us today. So what's happening as we're sending this question, right, is that this question is being sent, in this implementation, as an unstructured query. So this sentence is being diced up, it's being tokenized. They're using traditional search technologies, but they're also using a bit of AI-empowered search. Each of these documents relating to TOGAF has been processed by AI, digested into pieces of meaning (segmentation is what we call that), and presented within an AI-empowered search index. So when you do a search and you ask this question, it goes beyond just simple text search. Behind the scenes we could implement more customized training, so that maybe when we see architecture development method we also look for instances of ADM; we may also do the reverse kind of inference, that if you see ADM we might extend that to architecture development method. Whatever the case, we do a search and we get back a set of candidate responses. Now these responses are, again, chunks pulled from the TOGAF standard itself. In this particular system we've developed, they're just provided to us with some metadata and a link to see the extension of that section. We don't make any additional inference or assumption from there, because this system is really meant to be a basic example
implementation. So we see our top item here with a score of 0.5. That's this system's sense that, I think this is relevant to you. This system under the covers has not been finely tuned to know exactly what we think is relevant; we've just loaded documents and worked with it in that state, to showcase where AI algorithms can get you without much training. So the first section we have is called Building Blocks, and this is a section, if I jump out here real quick, from the TOGAF standard itself, and if we open it up we can see the actual text from this section, with the parts of our search query that were found. Now, in this case we're doing a fairly rudimentary search, so the responses we're getting back include the tokenization of each individual word, highlighted here. There's also a possibility for us to take this kind of query and, again, break it up into a search just looking for passages, where we ask the AI system under the covers to go a step further and not return the whole answer or the whole section, but instead return what it thinks are the most relevant passages within that document. And so here, within this section, we see some description of the TOGAF ADM, we see some description of the different building blocks within the TOGAF ADM, as well as general characteristics of those building blocks. Because we had to break it up, there are 109 documents, or segments, of TOGAF that matched, with 93 positive sentiment, 6 neutral, and 10 negative. I would assume the 10 negative are like anti-patterns or something like that, you know, because this is a methodology, but for whatever reason somebody wrote those 10 sections in a negative tone. And that's exactly correct. And what's important to distinguish here is that this system, in the current basic implementation, again, is not doing very complex processing of the question. We've intentionally done that to showcase what systems can do without much tweaking. What one can do as one evolves a system like this is progress it
by adding more advanced analysis of the search phrase, and I'd like to showcase that architecture here. So this represents a possible extension of what we've just shown, where instead of passing the question directly to your search index that has AI behind it, you do several levels of pre-processing of your question. This pre-processing can be thought of in a similar fashion to what you saw with the professions chatbot: we're going to try to classify the question to figure out what manner of question it is, and we're going to try to do phrase-level classification to say, you know, he's asking about a relationship, and he mentions TOGAF and he mentions ADM, and there's an implied relationship between those two concepts that we think is meaningful. So to showcase that at a very high level, I've actually got another very simple system here, and this very basic system just accepts a phrase and attempts to digest it using a very shallow set of training data that we've provided. So in this case the training data is around asking some of the similar TOGAF questions. This system is trained on our custom sentence-level classifiers, but it also has phrase-level classification that's untrained; this is stuff pulled from things like Wikipedia. So if we send this question out, what we get back is a classification of our question. Again, this is based on the training data in that spreadsheet; it's very shallow. We trained it so that, based on questions like this, we think you're asking about inputs, we think you're asking about at what point in this ADM process do your business continuum requirements serve as an input. At the same time, in our untrained model that does Wikipedia-style analysis of phrases, it picked up ADM as an organization. Now we know that's not right, we know that it has a different meaning here, but this general-purpose model that we're showcasing perceives ADM as an acronym representing a company, because in most other contexts, like IBM, like TOG for The Open Group, it does represent an
enterprise or an organization. So if we were to take our search system on top of TOGAF and evolve it further, we would do an approach like this, to try to understand at a deeper level the semantic meaning of what is a person asking about, combined with what are they asking, and use that to do a much more targeted query. So another system that we'd like to at least mention as another possible example of AI aligned towards the business of standards, though unfortunately we don't have an easy way to demo it, is a conference call transcription system. Within the work of standards development there's a lot of great dialogue. You've heard some of it here at the conference today, people presenting, people sharing ideas. Right now there are folks in member meetings having discussions about problems and having spirited dialogue about, it's this way, no, it's that way, etc. An important part of generating standards from that kind of activity is taking notes, taking minutes, recording what people are saying and distilling that into a sense of significance, where you say so-and-so from IBM said this, and here's the implication of it; so-and-so from another company said that, and here's the implication or the action inferred from that. So in the current process these minutes are generated using an artisanal process, they're handmade, and there's value in that, because there's a lot of context required for many of these minutes to be meaningful; you have to know what the group is doing to understand the significance of what someone just said. So another way this could be empowered by AI would be the use of AI for audio transcription. In the case of a system that, you know, exists on my laptop but is not something fun to demo, you can play audio to the system, and the system can transcribe the text of what people are saying and bank the text associated with that meeting. This represents fairly low-hanging fruit when it comes to AI application. Where things get
interesting, and potentially a little scary, is what you do with that data, because in that data set you now have a lot of insight coming back around who said what, what is this forum talking about, in this forum's meeting yesterday did they talk about that other forum. And those kinds of insights are valuable and useful, though for our initial example here we looked at it more from a cost-saving perspective, figuring out how to help automate the process of minutes. So really, for many of these systems, it comes down to understanding the question that you're being asked. As we saw, this is not a trivial thing, but it can be addressed if you're very intentional and narrow about how you want to scope your efforts. In the case of the chatbots, you have the brilliance of your chatbot engineers and your business process owners, folks who know the domain well, like Andras and some of the staff at the Open Group. In the case of something more open-ended like TOGAF, you have a sea of questions, the ones you know people are going to ask and the ones you don't, and you can attempt to use AI to create intelligence around that, and you can also attempt to use a little bit of staff expertise to train the system to decompose those questions into some kind of meaningful query. But ultimately you go from that question, from that unstructured data, into some kind of structure that has deeper insights, that enables things like entity resolution, that enables us to understand that when someone asks a question of our TOGAF system, they're specifically talking about a particular concept within TOGAF that we can have the system meaningfully assert is part of this section. So at this point we've sort of concluded the live demo session, and we've done it probably 21 minutes over, so we've got about 9 minutes. I mean, one of the things that I didn't show you was that we can actually get information about how the chatbot is being used, and here we have analytics that show the conversation
and the amount of conversation usage, and the top intents and the top entities that were utilized. And this gives us kind of an idea of whether or not we're getting the right information about a particular intent. We can select the Open Group intent and see what the context was there; we can see that there were a total of 12 conversations about the Open Group, and that the conversation over the last few days dipped and then increased. So we can also take the logs and look at the types of questions that are being asked, determine whether we're getting any errors or not, and then retrain the chatbot based on what that particular data showed. And you'll see some of this process might remind you of some of the work you do in software development: you get it to a point where you think it works, and then you just have to continually analyze it, see how it's working, modify it, and update it as you need to. And this is where we see a lot of importance in treating that as a separate formal process, hence some of the guidance we've given so far, and some of what we see with many clients today. So one of the things that you would do is probably pretty this up on another interface, maybe even change the modality so that you could use just natural language recognition. And one of the other things that we can do without any kind of effort whatsoever is run it through a translator and go back and forth between English and another language, and talk to somebody who is a non-native English speaker. So they can say, I'd like to talk to you in Japanese, or something like that, without any effort whatsoever, and I don't have to spend any money on translation. Now, that approach is one approach to localization. IBM has done both; in some of our larger enterprise systems we try to compare answers in the translated language to answers in the native language itself, but based on the complexity of your system, translation upon question received will likely work for many use cases, and
there are some parts of at least IBM where we do use that, to take advantage of providing multilingual experiences without having to invest in deep translator expertise for a variety of translation tasks. Or you could just do the translation, see if it's right, and tweak it, so you don't have to spend a lot of time on it; that's the other thing. And you could use the information that comes out of this as analytics to tell you what the sentiment of working with the open profession chatbot is. Certainly. And though for many of these AI projects you initially build them with the intent of addressing a business problem, like automating the creation of minutes, or saving the staff from having to respond to a bunch of important questions that have the same answers, as Andras mentioned, over time you'll create data that allows your enterprise to do the next step. Maybe you look at all the questions you're getting and determine that, hey, a lot of people need guidance on one particular part of OpenCA; perhaps we should change or add additional guidance in that section, perhaps we should evolve our offering and the documentation for that offering. Similarly with the TOGAF example: if we find a lot of people are asking questions about something we don't really address in TOGAF, maybe that informs us that that's a new area of extension. An example of how organizations truly become data-driven is that you have systems that feed on your data, thrive on your data, and generate more data in turn that you can use to evaluate the next best step for better business outcomes. All right, where do we go from here? We've got some lessons learned to share in the last four to five minutes, and I'm sure we can jump through those relatively quickly. Oh, bias, my favorite, and brand identity. So, bias. My favorite example of bias recently was a situation where a company was actually using AI to find the best candidates, which, by the way, we actually do at IBM: we score your skills and we actually
are trying to rate employees based on AI assessments. And those assessments actually use things like information about the skills that you attain through Acclaim, the classes you've taken within the formal training you've done, information about your social eminence, how many assets you've contributed, stuff like that. Don't forget your certifications. And your certifications. But recently a company actually was using a model like this, and they found that it was biased against certain universities and potentially against women. And, you know, when Michael and I really dug into their model, the AI was doing what it was intended to do, find the best candidates, but it was using some data that it got from a few sources that was leading it down the wrong path from training. But in reality, did they really need to have gender at all? Right, and that's the question. I mean, this example: this organization was looking for the next big tech talent, and it just so happened that in their existing data set of the people they had hired, many of them were men. And the AI system, which they didn't have hands on every part of, kind of reasons, well, what do these top performers have in common? One of those things happens to be that they're men. So what ended up happening when people were submitting applications is it would look for and recognize mention of gender, and score that not as a negative thing in the case of women, but rather as a positive thing in the case of men, because the system just blindly reasoned. There are ways to handle this, and I kind of had a laugh when I learned about it, because they're relatively well known: there are ways to hide the features that you don't want the system to learn from, so the system doesn't see male or female, it just sees strong technology background, maybe leadership, maybe, you know, maybe they're in the arts as well, right, got to have your balance. But when you give it data that's not rated or shaped properly, the system will pick up on weird things. I mean, I was a little sad to find out that the system
didn't reason that people with dogs end up being better employees. Yeah, in my sample size of one, that's the trend I see. But in reality, too, you might want some bias in the system. Certainly, you might want to have more minorities, because you don't have enough minorities, and even though they aren't really, you know, bubbling up to the top of the performers list, you want to promote them or score them higher to assess them sooner in the cycle. So in some cases you actually, culturally, want bias put into the system, so, you know, it's a little tricky. Right, yeah. I mean, in the case of the tech company again, they might notice that their best performers have formal tech backgrounds; that just might be unarguable. And I know some people say hire the bachelors in history, and not to say they're bad programmers, but let's say their data today says all of their strong folks have computer science undergrads, and maybe masters as well, but you want more diversity, because you say we need people who don't come from strict tech backgrounds but learned tech later. You might have the system say, if you're from a non-tech background, like Andras was mentioning, let's give you some extra points, let's not exclude you immediately, or let's score you in a separate pool with separate parameters, that being one of them. Yeah, that's one way of doing it. So you look for top performers, but maybe they don't necessarily have, you know, a tech background, or the same tech background, but they're trainable, just like the system. Yeah. Okay, so we're formally on the hour, but I think we can check through what we've got. So, you know, bias comes in different forms, and, you know, some of the challenges that you face for deployment of these things into the enterprise, which kind of boil down to trust and transparency, are around the difficulty in integrating it into the business applications themselves, managing the internal policies, the resistance to AI, and the lack of DevOps or those skills that we talked about earlier that aren't
necessarily readily available; you're going to have to train them. And maybe not even understanding the analytics of the data itself. Right, so, you know, there are three or four particular roles here that we really have to focus on. We talked a little bit about this before, but there's building the solution, the data scientist's role; creating the solution, which is part of the software engineer's responsibility; and this whole idea of AI management and the business user coming together. And those folks have a lot of responsibility for trust and integrity, to make sure the model is working the right way. So we did come up with a project that we call OpenScale, which we're inviting other companies into and which we're open-sourcing, and it's really intended to scrub your model, to make sure that you're not putting an unintended bias into it, things like adding gender or ethnicity, or biasing against a certain school. Let's say that, you know, all of a sudden your AI model starts picking West Coast schools over East Coast schools; if it doesn't know anything about them, then possibly it can't bias selection, you know, bias toward folks who went to the University of Virginia versus UC Berkeley or something like that. So OpenScale is intended to actually look at payload logging from an integrity point of view, making sure that there is visibility into how the model's performing operationally, to be able to more fully explain the model, and to define some tests to determine fairness. So it generates data for you to actually run through the model, and then it creates that model-ops piece that we have been talking about. And this is really important, because bias has actually become an inhibitor to using AI, as people have looked at unintentional use of gender that was included in models previously. So how does AI impact your brand, Michael?
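The fairness tests just mentioned can be sketched very simply. This is a hypothetical illustration, not the OpenScale implementation: given decisions pulled from a model's payload log, compute the disparate impact ratio (the favorable-outcome rate for a monitored group divided by the rate for the reference group) and flag the model if the ratio falls below the commonly used 0.8 threshold. The log data here is fabricated for the example.

```python
# Hypothetical sketch of a disparate-impact fairness check, in the spirit of
# what a monitoring tool like OpenScale runs against payload logs.
# Each log entry is (group, favorable_outcome) for one model decision.

def positive_rate(decisions, group):
    """Fraction of favorable outcomes the model gave to one group."""
    outcomes = [favorable for g, favorable in decisions if g == group]
    return sum(outcomes) / len(outcomes)

def disparate_impact(decisions, monitored, reference):
    """Ratio of favorable-outcome rates; below 0.8 is the common red flag."""
    return positive_rate(decisions, monitored) / positive_rate(decisions, reference)

# Fabricated payload log for a hiring model: (gender, was_recommended)
log = [("F", 1), ("F", 0), ("F", 0), ("F", 0),
       ("M", 1), ("M", 1), ("M", 1), ("M", 0)]

ratio = disparate_impact(log, monitored="F", reference="M")
print(round(ratio, 2))   # 0.25 / 0.75 = 0.33
print(ratio >= 0.8)      # False: this model would be flagged for bias review
```

Note that this test never inspects the model internals; like the payload-logging approach described above, it works purely on observed decisions, which is why it can run continuously in operations.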
Well, you know, as we saw in the couple of interactions we had with the sample systems we showcased, whenever you interact with an IT system representing an organization and providing expertise attributed to an organization, it in essence becomes a representative of that organization. This is why, when I work with customers to build AI systems that are externally facing, I always have someone from marketing in the room, and I always have someone who represents their business transaction or interaction, because these AI systems ultimately define someone's experience. Like, if someone interacted with our chatbot and managed to confuse it, they'd say, man, this group is crud, this certification's no good. And we all know that's not true, but what they experienced would give them that sense; they would walk away with the same idea as if they had interacted with someone at the Open Group who was just that rude, because that's a frustrating experience. Now, if I get questions I'll send them to Michelle, because I know she's nice, so they'll like the Open Group. But this is a key thing you have to understand: all of these brand touchpoints becoming automated still has huge implications. This is why we're big on human-in-the-loop for most things, because this human experience that AI creates, again, is going to define that brand identity you have in the market. So there are a few approaches here, five to be honest with you. Obviously we've shown two of them: the customer service interaction with the chatbot, and enhancing the work of the knowledge worker, getting insight into the structure of TOGAF through the use of IBM Watson Discovery. But there is also managing complexity and risk, so we integrate AI into things like Watson for Cyber Security, taking massive amounts of data that's coming out of the enterprise on how your security is functioning, which is certainly a good model; using it to find the best talent, which we actually do use in our talent systems within IBM; and to empower developers
to actually create AI-based applications themselves. In our case we showed a few example applications, but in reality you would build up the underlying AI systems, and then that's a thing you can integrate with for other experiences. You could take that chatbot and give it a phone line that one calls in to, for example, instead of just a web-based experience. So, I mean, here are just some other examples of ways one can get started, but again, it really follows pretty much everything we've been describing, the ways you can improve some of these business processes: target what's critical to your enterprise and figure out, do I have data around this, and can I apply just enough AI to get started, preferably AI someone else built, where then I can just focus on deriving value from that interaction. A subject you really like. Well, when it comes down to these systems, at the end of the day, the architecture still matters. I mean, in our example of showcasing some of these few sample applications, right, we edited a model on the fly and things got hairy very quickly. That's the reality, right? These systems have an inherent complexity, and maintaining them and making them successful still requires architecture: architecture from the application perspective, architecture from the data perspective, from the model perspective. In the enterprise systems that we've built, doing even chat stuff, you've got multiple environments, you've got some kind of, you actually have a change review board, a change management board. I know we think that's an ugly word, but, you know, those processes still serve value, and architecture is a core part of that. Yeah, certainly true, and we have tended to throw architecture out, you know, the baby with the bathwater kind of analogy. We went agile, we went iterative, we created this idea of the minimum viable product, but the product is not really viable, and it's sometimes not even minimal, and all the data it's generating is building up technical debt. It's
not the right solution. So you definitely have to think in terms of the -ilities, because right now agile and design thinking are all about outside-in; they think of the system in terms of how the user wants to interact with it. That's great, but as we know, you know, from the Open Group, a lot of the success of your system is all of the -ilities, the 40 different -ilities, the non-functional requirements that are necessary to build a system that's maintainable. And you have to begin to think about the architecture from the inside of the system out, instead of just the end-user perspective, which is mostly what we're doing these days. I think that's it. Yeah, wow, we made it to the end. Any last-minute questions before we let you escape? It is 5:09. Nope. Ron? So if I had actually created an intent that was bridges versus badges, I could actually go back and fix that in the model pretty easily. If somebody actually retrains the AI network to somehow, you know, befuddle badges and bridges, then I'm probably going to have to fall back to a past corpus. Yeah, I mean, it comes down to data curation, right? This is why Microsoft Tay suffered such an untimely fate: if you let just anyone, you know, adjust the model, you're going to have a bad time. And this is where I bring back the comment about DevOps. You want to be able to move these things quickly. In the case of our internal version of the kind of chatbot we showed, we've got IBM's global career team, who works with IBM's technical career path, and once a week they look at the data, once every two weeks they propose changes to each other, and once every month or so they actually make those changes. Sometimes they make them on the fly, but, you know, we sort of empower them, using DevOps methodology, to make a mistake, and if something goes bad, they have a button that I put in that they push, and you run your DevOps pipeline and you take the old stuff and, you know, throw it away and put in the new stuff, or vice versa. Right, I mean, it goes back to that same fail-
fast, fail-forward methodology. In our case you saw, right, the caveman version of it: people typing, oh, I think it should be this, or whatever, and if you make a mistake, going back is really hard. But with the DevOps practices that we all know and love, doing rapid iterations, you can start to address a lot of that complexity.
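The reference-data-set validation mentioned a couple of times in this session (replay a golden set of questions through a retrained model before promoting it, and roll back if accuracy drops) can be sketched like this. The golden set, the `classify` interface, and the broken candidate model are all hypothetical stand-ins for whatever your assistant platform actually exposes.

```python
# Hypothetical sketch of a DevOps-style regression gate for a retrained
# chatbot: replay a golden reference set of questions and only promote the
# candidate model if it still classifies them the way we expect.

GOLDEN_SET = [
    ("Where do I find my badge?", "badges"),
    ("Tell me more about OpenCA", "openca_info"),
    ("How do I get recertified?", "recertification"),
]

def evaluate(classify, golden_set, threshold=1.0):
    """Return (accuracy, promote?) for a candidate classify function."""
    hits = sum(1 for question, expected in golden_set
               if classify(question) == expected)
    accuracy = hits / len(golden_set)
    return accuracy, accuracy >= threshold

# A candidate model that regressed on badges, like the bridges-vs-badges
# mix-up from the Q&A:
def broken_model(question):
    q = question.lower()
    if "badge" in q:
        return "bridges"
    if "openca" in q:
        return "openca_info"
    return "recertification"

accuracy, promote = evaluate(broken_model, GOLDEN_SET)
print(accuracy)   # two of three golden questions still pass
print(promote)    # False: run the rollback pipeline instead of promoting
```

This is the "button they push": the gate plus a pipeline that restores the previous corpus is what lets a non-engineering team like the career staff make changes safely and fail forward.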