Quick background. I did my Stanford undergrad in computer science with a focus in AI, and I also picked up a second focus in religious studies with an emphasis on ethics — the study of right and wrong. They seem very different, but when it comes to AI — biases, where the future of AI is going, things we'll talk about today — they actually have a lot in common. I did some years of consulting, got my MBA, left the United States for a while to work at Samsung, then came back to the States and worked for Beats, and then for Apple through the acquisition of Beats, and now I'm at a startup. We do really cool stuff with AI now — that's most of the excitement of the last couple of years. As far as current product experience with AI, that includes fitness and health apps, music services, voice assistants, and text-based assistants, on both the consumer and enterprise side, across a couple of different products. And I know Apple's a big name out here, but I have to promote Samsung because I'm a Samsung fan. Also, feel free to interrupt me at any point — we can stop, pause, and go on tangents. So, quickly: why am I talking about AI, given that AI has been around for a long time? Like I said, I started with AI back at Stanford over a decade ago, but the field really started in the 1950s. It's had a hype cycle: early on people thought we'd pass the Turing test any day, then it came back down — we sort of got there, but it wasn't that exciting — and now it's coming back up, and that has a lot to do with advances in machine learning, and really with advances in deep learning.
DeepMind at Google and others have shown machines teaching themselves things — how to run, how to walk, how to draw. Those advances are really why we're talking about AI now, and why it's climbing back up the hype cycle. There's actually a spectrum to AI. When most people say AI, they're talking about a narrow version of AI, and narrow means focused on one very specific task: I can only book flights for you; I can only brew your coffee every morning. One specific task — that's why we call it narrow AI. It can do that task very well, but it won't do anything else well. General AI is where it becomes more human-like: a combination of skill sets and domains, with the ability to jump between them. What it lacks is the ability to set its own goals, to have its own wishes and desires. That's what you get with superhuman AI — or what I like to think of as more of a collective AI. There have been a lot of movies about this — Her, Ex Machina, and plenty more before those. But I actually think that kind of AI — the singularity AI, the AI that kills us all in the movies — is more of a collective: not quite a hive mind, but a lot of pieces working together in harmony. At the end of Her — I don't know if you saw the movie; spoilers for about 30 seconds — the AI develops lots of relationships, falls in love with lots of different people and other AIs around the world, and maintains all of them simultaneously. I think that's where we're headed: something more collective.
I can get really philosophical about AI, but jumping back to the top of the spectrum — narrow AI is the main focus, that's what everyone's really doing right now. Your Alexas, your Google Assistants, your Siris — they're all more or less narrow AIs trying to become more general. So yeah, stop me if there are any questions from the back. [Audience] Does general AI exist out there, like in some supercomputer some government is hiding? No — in the academic community the answer is no. In the business world some people say they have it; I wouldn't trust them. So, coming back: some examples of AI you're already using every day, maybe without recognizing it. There are chatbots — sometimes you don't even know you're talking to one, sometimes it's very obvious. Customer support is one of those: it starts out with "hey, how can I help?", then you say "I want to do this thing" and get "sorry, could you repeat that?" — and now you realize you're talking to a chatbot, not a person. There are recommendation engines — Netflix and Amazon have been doing this for years and years. Predictive services: every time an app tells you a trip is about to take X amount of time, that's an AI calculating, making a prediction about the future, and making a decision about where you should go. Spam filtering: what is spam versus not spam? Those are decisions made by algorithms — that's AI. Self-driving cars. When your phone or your Fitbit knows you're walking versus running, that's a machine learning model someone trained making a decision, and that's AI. I kind of missed this point earlier, sorry: when I talk about AI, the intelligence part is the decision-making part. If there's no decision to be made, it's not really AI — it's just an automated process.
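That "decision point" idea can be sketched with a toy spam check — the keyword list, scoring rule, and threshold here are invented for illustration, not how any real filter works:

```python
# Toy illustration of "AI is the decision": everything before the branch is
# automated processing; the branch itself is the decision being made.
# The keyword set and threshold are made up for this example.
SPAM_WORDS = {"winner", "free", "prize", "urgent"}

def classify_email(text: str) -> str:
    words = text.lower().split()
    score = sum(1 for w in words if w.strip(".,:!?") in SPAM_WORDS)
    # This branch is the "intelligence" part of the pipeline.
    return "spam" if score >= 2 else "not spam"

print(classify_email("URGENT: you are a winner, claim your free prize!"))
print(classify_email("Lunch tomorrow at noon?"))
```

A real filter would learn the scoring from data rather than hard-code it, but the shape is the same: inputs in, a decision out.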
Once it makes the decision — when there's a branch in the flow and something decides whether to go left or right — that's the intelligence. Sorry for jumping around the curve there. The more advanced stuff — your Siris, your Alexas — I call controlled assistants. It's a term I made up, but I like it. A controlled assistant is something you give commands to: do this — okay, I can do this; do A, do B, do C — oh, I can't do C. That's your relationship with that kind of system. Then you have uncontrolled agents. I call them agents because they act on their own — that's the term for it. You see these in games: you have a team of five running around fighting aliens, and the other four characters do their own thing — they figure out which guns to pick up, where to go, who to shoot at. You don't control them, you don't tell them where to go or how to act, but they follow rules and have an understanding of the world. The last piece, not so different from the above, is the different use cases in enterprise and government that are more behind the scenes: security, facial recognition for identifying people, aiding law enforcement, fraud detection, making financial decisions. These are behind-the-scenes things you don't see, but your life is being impacted every day by AI. [Audience] Could you explain how Uber is an uncontrolled agent? I put Uber in there because of Pool: the deciding that these three people are going in roughly the same direction, so I should put them together in this order. You don't have control over that — and in a sense Uber doesn't have control over that either.
Uber isn't telling it, for every ride request that comes in, "do this and do that." They created an AI that looks at the circumstances, the rules, and its model of the world and says: I'm going to make this decision, and people are going to have to live with it. That's what I mean by uncontrolled agent. You can't tell it, "don't put me with this person." [Audience] Correct me if I'm wrong, but a different way to look at it: "controlled assistant" means controlled by the user — the person interfacing with it. With an uncontrolled agent, somebody else has set the bounds of the control, not you, the person interacting with it. Even in the Uber or game cases, someone set an algorithm or a set of parameters it won't go outside of. Someone at Uber is deciding what order the riders go in, based on efficiency and their revenue model. I'll elaborate a little more. As we get to more intelligent AI systems, this is going to become more important — how we deal with the ethics of it — because I gave it rules, but I didn't know everything it was going to do with them. Take the self-driving shuttle in Vegas that crashed an hour after it started: it had a good understanding of driving on the road, but it didn't know how to deal with drivers who behave badly. A truck was backing up into it, and all it did was stop and honk. All it knew was: I need to let people know something bad is happening, but I don't know what else to do. In the real world, a person would have backed up, tried to move out of the way, yelled — done something else.
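The carpool-grouping decision can be sketched as a rule the agent applies on its own — the rule here (riders with the same heading share a pool) is invented for illustration, and real dispatch systems are far more involved:

```python
# Hypothetical "uncontrolled agent" decision: neither the rider nor an
# operator picks the grouping -- the rule does, and everyone lives with
# the result. The same-heading rule is made up for this example.
from collections import defaultdict

def group_riders(requests):
    """requests: list of (rider, heading) pairs, heading one of 'N','S','E','W'."""
    pools = defaultdict(list)
    for rider, heading in requests:
        pools[heading].append(rider)  # the agent's own grouping decision
    return dict(pools)

print(group_riders([("Ana", "N"), ("Bo", "N"), ("Cy", "S")]))
```

The point is where control sits: the engineers set the rule, the agent applies it per request, and no one dictates each individual grouping.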
So with an uncontrolled agent, I essentially said, "here's the code, go out there and do stuff," but I don't know everything it might do, or how it will react in every situation. Controlled is very specific: do X, and you're going to do X and nothing else. You're going to drive straight, and you're not going to decide to change lanes. You have no freedom to make decisions — you do what I tell you. But even staying in the lane is AI too, so the distinction is a little tricky; hopefully it makes sense. We could have a much longer discussion about AI, but this being Product School — I've been a product manager for a number of years now, doing AI. I have a very technical background. Not everyone has to be deeply technical, but one thing I think is beneficial for anyone who wants to work with AI in products is some basic understanding of how it works — at a more granular level, what components are involved and why they're necessary. Then you can start to make decisions about how your system can compete against a Google Assistant or a Siri. So here are some of the main components. It's not an exhaustive list, but it's the main stuff. ASR is automatic speech recognition: it takes the words coming out of my mouth and turns them into text, hopefully with a lot of accuracy. We've gotten pretty good at that — it's basically a solved problem. TTS is text-to-speech: it takes words on paper and turns them into vocal sounds. NLP and NLU are often used interchangeably, but there are slight differences. Natural language processing, NLP, includes grammar and structural understanding: there's a verb here, a subject here, and this word may be similar to this other word — like "car" and "automobile."
NLU — natural language understanding — adds meaning on top of that. When I say "this is good," that's a positive thing; "all right" means go do these other actions. It adds intent to what's being said. NLP processes the words; NLU attaches meaning to those words. It's something we do all the time as humans without thinking about it, and it's also how you get misunderstandings: I understood it one way, you understood it a different way. Machines — AI — will have that same problem, and we'll have to work through it. Once the system understands, there's some action to take, right? "Buy tickets to this movie at 9 p.m." Okay, I understood there's a movie, and the action is "buy" — now, how do I buy things? That takes programmed logic, often organized in a concept called skills: the logic to carry out all these actions. You have knowledge bases that support that kind of information: "buy me tickets to The Last Jedi" — the system has to know what The Last Jedi is, so somewhere there's a database entry that says The Last Jedi is a movie. Then there's memory, short-term and long-term: "my name is…" — that needs to be stored somewhere. That's a little different from context, which is what's happening around me right now — it may not be stored, just understood. "I want to go home." The context is that I'm starting from here, but I don't have to say where I am; the system has to pull that information in. It's not stored information it can go find — it's context.
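The knowledge base / memory / context split can be sketched with made-up data — the entries and the stub location are invented for illustration, and a real system would back each piece with proper services:

```python
# Sketch of the three pieces above: a knowledge base of stored facts,
# long-term memory that is written down, and context that is fetched
# live at request time rather than stored anywhere.
KNOWLEDGE_BASE = {"the last jedi": {"type": "movie"}}  # stored facts
MEMORY = {"user_name": "Sam"}  # long-term memory (name invented for the example)

def current_location():
    # Context: known at request time, not stored. A real system would ask
    # the device for a GPS fix here; this stub just returns one.
    return "37.77,-122.42"

def resolve(phrase):
    # "What is The Last Jedi?" -- look the phrase up in the knowledge base.
    entry = KNOWLEDGE_BASE.get(phrase.lower())
    return entry["type"] if entry else "unknown"

print(resolve("The Last Jedi"))
print(f"Take {MEMORY['user_name']} home from {current_location()}")
```

Notice "I want to go home" needs all three: the knowledge base to understand entities, memory for who you are and where home is, and context for where you are right now.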
So that's a quick run-through of some common components used in AI. The real secret sauce of each player — why an Alexa is different from a Google Assistant is different from the rest — is how they put these together: the architecture decisions they make, what happens first and what happens afterwards, where they get their data sources from, and so on. Understanding these things can help you think about how your product can be different. Take TTS, text-to-speech. It's very easy for machines to read the words on paper, but making them sound human is a much harder problem — or how much money do you want to invest in a person actually recording every single word, in every single language, in every accent of that language? So there are boundaries there, and maybe that's where you differentiate your product, because no one's doing it well in Russian, or in some other language or dialect. A really good TTS can set your product apart from Google, Siri, Alexa, et cetera — something you might not even think about otherwise. [Audience] Question: I know there are NLP libraries in Python or R, for example. Are you aware of any libraries for NLU in Python or R? Or did you build your own? I put NLP and NLU together because, depending on how you structure the processing, a lot of people do them together, so many of the libraries are similar or reusable. As for how my company does it: there aren't libraries that can do everything, so we build everything from scratch; we don't use any external services. That being said, though —
— a lot of the startups creating these services, the ones you can use so you don't have to build it yourself, are getting bought by Apple and Google; Facebook will probably buy one soon, and Salesforce bought one. A lot of them are getting bought up right now. [Audience] So if you wanted to build, say, a sentiment-analysis system, would you use NLP or NLU? I would use both. NLP tells me which words to pull out: okay, this word coming up is an important word, pull that out — that's NLP. Then once I have that word, how do I know if it's good, bad, or neutral — happy, sad — and build some understanding from it? That's NLU. So I'd put them together; they're not too far apart from each other. From the input you pull out the pieces that drive the actions: collect this many of these things, generate the understanding from them, and that understanding feeds the actions. Let's move to the next slide. Context is one of those inputs. When an utterance comes in — the user said this — you also know what time of day it is, the user's location, what's happening in the world, and some other things I won't get into, because that's probably the rest of your secret sauce. You pull all those things together with what the user actually said to help classify it.
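That NLP-then-NLU division of labor can be shown with a toy sentiment pass — the stopword and sentiment word lists are invented for illustration:

```python
# Toy split of the answer above: "NLP" pulls out the words worth looking
# at, "NLU" attaches meaning to them. Both word lists are made up.
STOPWORDS = {"the", "a", "is", "was", "this"}
SENTIMENT = {"good": 1, "great": 1, "happy": 1, "bad": -1, "terrible": -1, "sad": -1}

def nlp_extract(text):
    # "NLP": pick out the words that matter.
    return [w for w in text.lower().split() if w not in STOPWORDS]

def nlu_sentiment(words):
    # "NLU": assign meaning -- is this good, bad, or neutral?
    score = sum(SENTIMENT.get(w, 0) for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(nlu_sentiment(nlp_extract("this movie was great")))
```

Real systems learn both stages from data, but the pipeline shape — extract, then interpret — is the same.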
Training: there are a lot of libraries — TensorFlow and that kind of thing — that you can use to get your models going and help train them. What you're going to do is classify: "buy tickets for such-and-such" — that's an intent, for movies, or buying tickets, or however you want to classify it. Then you need to extract things out of that utterance once you've classified it: what time, how many people. That's why you get follow-up questions when you ask Alexa or any other system to do something and information is missing — the extraction piece drives the natural follow-up question: "for what time?", "how many people?". Then it takes all the information it extracted after classifying — I know what action I want to take, I have all the relevant information, and maybe some understanding: I know what kind of movie this is — and now it can act. The actual buying maybe goes through movietickets.com, or other services and APIs that help it act. And these pieces are all areas for improvement. Say "buy movie tickets for nine people" sounds a lot like "buy concert tickets for two people." On the action side it goes: I've got tickets, I know how many people, and this concert name sounds a lot like this movie title, so I'm going to buy a movie ticket. And I'm like, no, no, no — I don't want a movie, I want a concert. That's a correction, and a lot of systems don't handle corrections at all. You have to start over, or you simply can't follow that flow, because it's going to make the same mistake over and over again. So there's a lot of work to be done in correction.
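The classify → extract → follow-up flow above can be sketched end to end — the keyword matching, regexes, and slot names are all invented for illustration, where a real system would use trained models (e.g. built with TensorFlow):

```python
# Sketch of classify -> extract -> ask for what's missing. Toy rules
# stand in for trained intent and entity models.
import re

REQUIRED_SLOTS = {"buy_movie_tickets": ["movie", "time", "people"]}
QUESTIONS = {"movie": "Which movie?", "time": "For what time?",
             "people": "For how many people?"}

def classify(utterance):
    # Step 1: classify the intent (toy keyword rule).
    return "buy_movie_tickets" if "ticket" in utterance.lower() else "unknown"

def extract(utterance):
    # Step 2: extract slots out of the classified utterance.
    slots = {}
    for slot, pattern in [("movie", r"\bto (.+?)(?: at | for |$)"),
                          ("time", r"\bat (\d+ ?[ap]m)"),
                          ("people", r"\bfor (\d+)")]:
        m = re.search(pattern, utterance, re.I)
        if m:
            slots[slot] = m.group(1)
    return slots

def next_question(intent, slots):
    # Step 3: a missing required slot drives the natural follow-up question.
    missing = [s for s in REQUIRED_SLOTS.get(intent, []) if s not in slots]
    return QUESTIONS[missing[0]] if missing else None

utt = "buy tickets to The Last Jedi at 9pm"
intent = classify(utt)
slots = extract(utt)
print(intent, slots, next_question(intent, slots))
```

Here the party size is never stated, so the system comes back with "For how many people?" — exactly the follow-up behavior described above.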
And along with correction there's learning: now that I've corrected you, how many times do I have to correct you before you learn? These two things are what keep us from really getting to general AI. It's very easy to go narrow; it's very hard to do these other two pieces. General AI can switch between domains and learn from past mistakes so it doesn't repeat them. Once we start getting those right, we're getting closer to general AI — and once you have that, you're getting closer to super AI. That's the missing piece. One more question. [Audience] You mentioned the secret sauce. What's the direction of this development? Will there eventually be tools — maybe open source, maybe not — that let us build our own applications, or are the big companies going to buy up all these startups and lock everything in? No — the buying of the companies is so Google, or Facebook, or Apple doesn't have to invest as much in research someone else has already done. What they want is to make this really accessible and available, so that everyone's using, say, Google's version of how all of this works. Then you don't have to worry about the underlying pieces. Take the paintings in the background here — imagine an AI that helps people find the right painting for their home. You don't have to worry about "do I need to build a classification model and train it over and over again?" The platform says: no, we already have this for you — take that module and put it in your own app. You can think about other products the same way. I think that's where we're going.
The issue will become — and here I'm going back to the philosophical side again, sorry, I keep veering that way — do we want all this information concentrated in a few select companies? Or should it be more broad and spread out? It's the same question we already face today: should you have your email with Google and also your photos, everything with one company — or should you keep the pieces separate and spread the knowledge around? Because it becomes a question of who's controlling what, especially as AI gets smarter and starts making more decisions for us. Are you controlling the AI, or is the AI controlling you? And is it controlled by a corporation? The government? Something else? That's the more philosophical side — problems you can chew on after the session — but it's something I think about. Right now, though, they all want you on their platform. [Audience] Everybody has a platform, and they're all separate. Say I build a digital assistant service for an app — do I have to develop it for every single platform, one instance for Alexa, one for each of the others? Or is there something you can build once that every platform understands? Yeah, good question. And that's part of the race too: can you make your platform so big that it becomes the new Windows or Mac, the new Android or iOS? Right now we don't have that. Apple only recently opened their platform, so that I as an app developer can use Siri to bring a voice agent to my app — and that only happened last year. So there's not a whole lot of it.
Alexa, you could say, has the lead on this, but most of its skills aren't used. So the big platform is really a platform that some people use sometimes. In short: yes, you have to go across the board — unless you build something of your own that's platform-agnostic, but then you have a distribution problem. Coming back from that, another point: AI is not going to be just the big companies — Google, Amazon, IBM, Apple — but they do have a structural advantage. Think about it this way: if you needed to host a website or a service, would you rack up a bunch of machines yourself, or would you go to Amazon Web Services? You'd probably go to AWS — infrastructure is easy and cheap to buy. That's where these companies are headed with AI, because you need a lot of data to power deep learning. And deep learning can do something basic machine learning can't. A basic machine learning algorithm, once you give it enough data, can say: that's a cat, that's a cat, that's a cat. But deep learning has the ability to learn something more — to make its own guiding decisions about where to take a thing. Maybe it can draw a new type of cat that's never existed before. That's the advantage deep learning has. But to do that, you need an enormous amount of data and understanding about what a cat is and what's not a cat. Gathering all that data, labeling it, making it accurate — that takes a lot of time and effort. That's where those companies are focused.
Then there's processing time. The reason I stopped doing machine learning over a decade ago is that it took me a day — literally 24 hours — to process simple changes and see whether my algorithm went from, say, 92% accurate to 92.5% accurate. I decided that was too slow, so I got an MBA and went into business. Now it's much faster, but it still takes a lot of processing time, and the GPUs these models run on are expensive. Another thing — narrow AI versus general AI. The big companies are really focused on how to make this more general. They have all this data, all these resources, a lot of computing power: how can we make these things more general? Because right now it's still in a narrow place. Even self-driving cars, in my view, are pretty narrow. They're somewhat general, but you can't take all the AI in a self-driving car and tell it, okay, now pick stocks. It doesn't work that way — and that's what I mean by general: sure, I know how to drive a car, and I can learn how to pick stocks, and I can learn how to walk a dog. A human can do that pretty easily; AI can't. Large companies get stuck in their platforms, so if you're working at a smaller company, you find the holes in those platforms and get into a niche — that's how it works. I'll talk about this more on the next slide, but there are also biases in everything. People write the rules; people tell the machine what's right. And those people have different cultural backgrounds and different value systems, and they will guide that algorithm, that machine learning, that AI, to think like them — to think like me.
Even if I try to say, "don't think like me," all I'm doing is teaching it the things that I don't think — which is another bias I have, right? Because you think differently than I do. And even if the data isn't biased, the way the system processes the data is biased: how you architect the processing. I mean, what does "learning" even mean? There's a lot behind that, and I don't want to go down a side track. [Audience] Are you going to cover supervised versus unsupervised? I touched on it earlier but didn't go deep — we can follow up after, but I'll try it quickly. Unsupervised means the data isn't labeled, and the machine has to cluster the information on its own. But for that to be useful, it still has to take certain actions: classify — I think it's this cluster, I think it's that cluster — and then understand what's in each cluster. All of that work is biased, because somebody decided: hey, do it this way — use every third word. Why not every fourth word? Why not use the first two, skip three, then come back? Those choices are a bias. So even unsupervised learning is still biased. Quickly, building on an earlier point: smaller companies need to be in this space, and they should use someone else's platforms for solved problems. Speech recognition is pretty good these days; I wouldn't encourage you to build your own speech recognition unless that's somehow your competitive edge. For example, I was working on a project where we needed to cover the UK. Do you know how many English accents there are in the UK? We needed our speech recognition to be better than what was on the market for all the different accents in the UK.
So that was the reason we wanted to train it ourselves. But you may not have that use case, so you may not need to do that — or have those resources. Again, try to find the unsolved problems, like extraction and knowledge resolution. Like I said, it's really easy to extract distinct things: "I want to go to this movie at 9 p.m." — I've got a movie, I've got a time, I've got a number of people. But what if I said, I want to go to movie A, B, C, or D? And I can go any day this week — or on these days: Monday, Wednesday, Thursday. Machines are really bad at understanding things like that, because now there are multiple instances of the same type of object or concept — a movie. How do I know where the first movie ends, and the second, and the third? They all run together. It's the same with a grocery list: buy apples, bananas, and quiches. You might say the system just needs to know how to separate them — and there are some tricks you can do, but that's not true AI, that's rules (find me later for a specific quote on that). And in a general sense, if we're calling it AI, we don't want to do it that way. So these are examples of unsolved problems. If you go out and solve them well, you'll probably get bought by a much bigger company — they'll say, okay, thanks for solving that problem, we'll add it to our system. Again, find the niches no one's working in. I don't think anyone's working on an AI to pick out art to put in people's homes. That could be a great business — art can be very expensive. It's something to think about.
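The "trick" mentioned above can be written out in a few lines — which is exactly why it's rules rather than learned understanding, and why it breaks the moment an item itself contains the separator word:

```python
# Hand-written rule for separating same-type items: split on commas and
# the word "and". This works on a tidy grocery list and fails the moment
# an item itself contains "and" -- rules, not understanding.
import re

def split_items(phrase):
    return [p.strip() for p in re.split(r",|\band\b", phrase) if p.strip()]

print(split_items("apples, bananas, and quiches"))
print(split_items("gin and tonic"))  # wrongly split into two items
```

A system with real understanding would know "gin and tonic" is one thing; the rule can't, and that gap is the unsolved problem.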
And then there's making your UX superior: how you talk to it, what voice you put into your product, the quality of that voice, things like that. There are lots of ways to make the experience around AI superior to what else is out there, and right now that's a smaller problem than it's going to be. Getting close to the end now — some things I think are worth going after. One is handling a mix of commands and unstructured, very conversational dialogue. If I say, "hey, turn on the lights" — okay, got that command. But then I say, "you know, I think I like it better when the sun doesn't set so early at this time of year." What does that mean to the system? It doesn't map to a command; the system has to take that contextual information and figure out what to do with it. Maybe it comes back with, "oh — should I always turn on the lights when the sun sets early?" It could have that kind of dialogue with you. Right now nothing really does that. So: understanding commands, handling free-form speech mixed in with them, and knowing when to speak up and when not to. Another is understanding non-verbal, non-vocal cues. Right now it's all built on text: I say something, the speech becomes text, the machine understands text, it generates text back, and then speaks that text to me. But it doesn't understand facial expressions or hand gestures. If I start gesturing like this — is that good or bad? The machine doesn't know. Honestly, you might not know either. But it might be bad.
But as machines come to understand those things the way a human would, that broadens what AI can do and what it can understand — and also how it can communicate back to us. Maybe a well-timed blink of its eyes could be something meaningful. Next: context. This is tricky. We have short-term and long-term memory. Should machines remember everything about us, or should they not? That's partly an ethics question. But we also need some understanding of what's important. How many people here use Gmail? And how many have ever marked an email as important? There's a "mark as important" feature in Gmail, and almost nobody uses it — because everything is important, or nothing is. Machines don't understand what's important and what's not, and it's so contextual, so personal. That's another area to work in. Another one is being proactive. Basically all systems today require you to think of something to say; they're command-driven. That's why Google Assistant doesn't have a name — it's called Google Assistant for a reason. It's not trying to be your friend or have a relationship or a dialogue with you; it wants to be a tool you command. You have to think of things to tell it, which makes for a kind of boring interaction. I would love for it to tell me stuff: "Hey, it's getting late — want a bedtime story to help you sleep?" Things like that. The last piece is building a relationship with users. It's a big topic, but it really comes down to one of my biggest pet peeves.
You want to use an AI system and it's like: okay, great, now give me access to all your contacts and your emails, give me your bank account and credit card numbers and all that information so I can do everything for you. Whoa — I just met you. Come on. Let's start with something simple, like "how's the weather?" We can work up to my contacts — "call my mom," or knowing that she should bring a jacket. We can work up to that kind of stuff, and that's how you build trust with people. And I think this will change over time, especially as AI gets smarter and people get more skeptical of it, and you'll see more barriers and things like that. So that's the end of my slides — go do these things and then let me know, so we can talk about it.

Do you think there's going to be a wave of skepticism before acceptance, as people work out the relationship between themselves and this AI?

Yes. And think about it — it's in every movie, and it makes sense, right? New things are scary for a lot of people, and some will try to resist. There will be groups that hate AI: "AI is terrible, I won't deal with it, I'm going to go live in a world that's pre-AI." There are examples of that today — communities that draw a line in the sand: nothing after this point in time. I think we'll see some things like that develop at the extreme. But for most people it will be more like: I don't understand why this thing acts like a person. I'm inviting something new into my house that makes a lot of decisions for me. Should it open the door for the stranger who came by because they're delivering something — whether it's really the pizza guy, or just someone knocking at the door wearing a hat that says "pizza" on it? Do I really trust this AI to make those kinds of decisions?
Until we start seeing that modeled well, there's going to be skepticism at first, and then after a while, with some trial and error, we'll figure it out. Some bad things might happen, then we'll make some rules and adapt, then more people will accept it, and so on.

Cognizant computing?

Cognizant computing — I actually don't know that topic well.

From a product manager's perspective: if you have an idea and you think AI is going to be a good fit for it, what are the steps to figure that out?

Great question. I know this is a product meetup, so we're definitely going to talk about product here, but the reason it took me so long to get to this point is that you first have to understand: what is AI? What are the levels of AI? And what are the components that go into making an AI system work? There are a lot of product managers working in AI who only understand it at a surface level, and they make very simple, basic AIs. There's nothing wrong with that — but you're going to be in a sea of other AIs, so how do you stand out? You stand out when you understand all the things we talked about earlier.
Then you apply some kind of methodology to that. You've got to find a problem to solve — say the one I just came up with, an art curator for my apartment. Then use cases and user stories: very traditional product-management cycles here — pain points, solving the pain point, with stories and diagrams. When you look at those user stories, look at the steps involved: first I've got to find a place that sells art, figure out what goes well with my apartment, and so on. Where are the decisions I have to make? Decisions are what make intelligence — and intelligence is how you get AI — so all those decision points are candidate places to use AI. Once you have the decision points, ask how complicated each decision is. Is it a simple rule, like red light means stop, green light means go? Then you can do something very simple — you don't need a complicated model agonizing over "is it this shade of green or that shade of green?" You can work in that very simple space. Or do you need something more complex? Once you understand that, get a team and develop a happy path. I don't call it an MVP, I call it a happy path: what is the most optimal case you can think of? Do that first. From the happy path you'll have a couple of things to think about: I have the decision points I want to augment with AI, I know what flow I want to go through — so what data do I need to support this? Data from the inputs that can be used — again, "my wall is this color, so I should probably consider colors in this range." That's an input. And once you have all the inputs you think you need, you go back to the earlier pieces — train, classify, extract. You need to train a model.
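As a sketch of the two ends of that decision-complexity spectrum (both examples are my own, purely illustrative): a plain rule handles the traffic light, while even a trivial distance metric — a first step toward a learned model — is what the fuzzier color question calls for:

```python
def traffic_action(light):
    """A decision simple enough for a plain rule: no model needed."""
    return {"red": "stop", "green": "go", "yellow": "slow"}[light]

def closest_color(rgb, palette):
    """For fuzzier decisions -- 'does this shade go with my wall?' --
    an exhaustive rule table breaks down. Even a simple squared-distance
    metric over a palette (a dict of name -> (r, g, b)) generalizes to
    shades nobody wrote a rule for."""
    return min(palette, key=lambda name: sum(
        (a - b) ** 2 for a, b in zip(rgb, palette[name])))
```

The point of the sketch: if your decision looks like `traffic_action`, write the rule and ship it; if it looks like `closest_color` — or worse — that decision point is a candidate for actual AI.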
The model then uses all those inputs to give a recommendation or make a decision. You test that: do I like those decisions or not? Sometimes it's really clear — is this picture a cat? We know whether it's a cat. But something more vague, like "do these colors go well with those colors?" — maybe not; it's in the eye of the beholder. For that kind of testability you need to go back to your user base: "Hey users, do you like what my AI is giving you?" — and maybe go tweak the models. Sometimes it's really clear, sometimes it's not. Then: refining the use cases, handling error conditions and unhappy-path flows, using context. You're not going to make everybody happy all the time — you have to be comfortable with the 80/20 of this. No AI is going to be perfect, because every AI has human biases in it. Until AI starts writing itself — but then that AI has human biases in it too, so I don't know where that leaves you; that's more a philosophy debate. Then update your application logic. In Alexa-skills terms, this is: do I buy movie tickets using one API, or use three APIs and find the best deal? Things like that help with the AI's action part. Once you've figured all this out, you collect samples and see how it's being used, you train your models, and you test. Train, test, repeat.

But where in this list — since you're retraining and seeing what the result is — would some sort of product actually be released, or is all of that still internal?

I mean, there's definitely a standard product flow: come up with a problem, design, build, test, launch, support. I just consider that the standard, and I wanted to make this meetup more about the AI part of it: how would I put AI in that design process, and how would I put AI in that build process?
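The train-then-test loop can be sketched with a toy model — a nearest-centroid classifier written from scratch here purely for illustration (a real team would reach for an ML library; the function names and the feature format are my assumptions):

```python
def train_centroids(labeled):
    """'Train a model': average the feature vectors per label.
    `labeled` is a list of (features, label) pairs."""
    sums, counts = {}, {}
    for features, label in labeled:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, f in enumerate(features):
            acc[i] += f
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in vec] for lbl, vec in sums.items()}

def predict(centroids, features):
    """'Make a decision': pick the label whose centroid is nearest."""
    return min(centroids, key=lambda lbl: sum(
        (a - b) ** 2 for a, b in zip(features, centroids[lbl])))

def accuracy(centroids, test_set):
    """'Test': do we like the decisions the model makes?"""
    hits = sum(predict(centroids, f) == lbl for f, lbl in test_set)
    return hits / len(test_set)
```

If `accuracy` comes back low — or, for eye-of-the-beholder decisions, if users say they don't like the output — you go collect more samples, tweak, and retrain: train, test, repeat.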
How would I put AI in the test process, and so on. But we can stay here — the next slide has the questions I'd ask myself if I were doing this.

Are you allowed to talk about any real-world examples — a problem you solved and how you applied this to it?

Well, this was a while ago, so I can go back a few years. Does anyone use Samsung phones? Anyone use the S Health app? S Health, with your phone in your pocket, uses the sensors to determine whether you're sitting, standing, walking, going here and there. That's the basic level, and once I know those activities, I can make other predictions: if you do this extra activity, you may burn extra calories, or you might hit these goals. So the problem starts with: how do the sensors know if we're sitting or standing? First, state the problem: the phone needs to know what the user's doing. What are examples of that? Sitting, standing, walking, running. Then: what do the sensor readings look like? Once I have those pieces of data, I can say: yep, they're sitting if the phone is at this angle; if it's at this angle for this amount of time, they're on a bus. So we collected a bunch of data points — we held the phones in different places, pockets, and so on — got all those data points together, and built a model around the data.
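The first-pass, rule-flavored version of that activity classifier might look something like this — to be clear, the thresholds, features, and function are hypothetical illustrations of the "at this angle for this amount of time" idea, not Samsung's actual model:

```python
def classify_activity(samples):
    """First-pass rules of the kind described in the talk: 'if the
    phone is at this angle for this amount of time, the user is
    sitting'. `samples` is a window of (tilt_degrees, movement_magnitude)
    readings -- invented features with invented thresholds."""
    avg_tilt = sum(t for t, _ in samples) / len(samples)
    avg_move = sum(m for _, m in samples) / len(samples)
    if avg_move < 0.1:                      # nearly still
        return "sitting" if avg_tilt > 60 else "standing"
    if avg_move < 1.0:                      # moderate, rhythmic motion
        return "walking"
    return "running"
```

Collecting labeled windows from phones held in different pockets and positions is what lets you replace hand-tuned thresholds like these with a trained model — and what exposes the hard cases, like sitting on a moving bus.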
There's a much longer list of how you make a model better, but that gets more technical than I want to go at this level. Then, same thing: we tested it. It was okay on the sitting, pretty good on the walking and running, so we went back and refined the sitting piece. Once we did all that, we took on the next ones: can we really tell sitting from being on a bus or in a car, things like that. Then we used that information to make more decisions — predictions like: they usually get home at this time, so at 6 p.m. should we suggest they walk for 30 more minutes, should they run, should they do something else? You can keep going — each new problem is just a new decision point, and you add it to the flow.

There was that story about the Apple Watch, where somebody was having a stroke and the watch sent out an emergency signal to a friend or family member. You could program all of that — different scenarios, who to notify, how to notify them.

Yeah, that's exactly it — the AI is trained to recognize stroke patterns and then make decisions on them.

How do you decide, pretty early on — do you have any disciplines in place to work through some of those biases? Maybe an exercise, or something like a "brain trust"?

Yeah. The early face models were trained only on light-skinned people, so they didn't work well on people with dark skin. This is where someone — whether it's a marketing person or a sociologist — needs to ask the right questions: if you're going to do this project, what's your demographic? "Oh, we only tested it on this group." Well, that's
not going to work — maybe we should get some more people and test on them too. So yes, I think the right times for those conversations are especially when you're determining what you need and what decisions should be made. In some cultures, for example, they don't want the alert to go to a family member — they want it to go straight to a doctor, because that's how the system is set up in a universal healthcare system. There are a couple of places in the process where we do ask. We also bring in a linguist — especially for speech — and sociology people, and a lot of marketing people and market-research studies. And again, this is high level; your day-to-day can be very different, and every company has its own way of approaching these problems — Apple and Samsung were very different about these kinds of things.

Cool. I left this slide for last — I wasn't sure if you'd ask questions, so I saved it in case I went through everything without any. So again: do I need AI? Yes, no, maybe. Do I want it to have voice capabilities — is it something that needs to be spoken to, or something that just happens? Like the health thing: I don't even ask it to figure out whether I'm walking; it just does it, built into the health application.

When you say that, is there a cost-benefit analysis — "what level of AI do I need," based on your definitions up front? On one of the earlier slides, step three I think, you went through your decision points to figure out what to do. Where's the cost side of the equation — how much time you have in your program schedule, how many resource dollars you have to spend to implement — which would then dictate a lot of what flows down from there?

Absolutely. With every product decision there's the triangle of time, cost, and quality — or scope; I'm sure there's a square version somewhere.
How much time is available, how much money is available. Other questions: are there libraries available, or do I have to write my own? Do I build, do I borrow, are there consulting companies that do this already? Time is one of those constraints, but at the end of the day, as a product manager, I ask myself what's right for the customer and what's right for the solution and the product. If I only have a six-month window, can I make the product that should be made in six months? If I can't, maybe I should think about doing something else in the meantime.

In terms of product management and AI — it sounds like a lot of it is detail-oriented understanding. Is it also a lot of technical knowledge that you need? I know you said you don't need a lot — these are things to know about — but in reality, as a product manager, is it more beneficial to have that kind of knowledge, or can you still contribute something valuable without it?

It depends — and it depends because it depends on the company you're in. At some companies product management is a marketing function and doesn't touch the engineering side; at some it's very deep and product needs a lot of technology; other companies are in the middle. But I would say: the more you understand — not how to do it yourself, but why these components and decisions exist — the better. Again: what does speech recognition do? Why is that important to the system? It's important because if we don't understand speech, nothing else works. Okay — now I know my users are in places where our speech recognition is historically bad, so now we have a
decision — a decision as a product manager about where I'm going to take this. If you don't know to ask that question about speech recognition, and you test only in America and then go out to this other place where recognition is historically bad, it's going to fail. Understanding those components helps you early on, even if you never learn how to build speech recognition yourself.

On "what level of AI do I need": one of the problems I've had — my company builds a lot of rules-based systems, we've done it for years and we're comfortable with it, and we ask, can it get better? There's somewhat of a fear that we build this whole structure, do all this deep learning, and it just turns out that it can't quite do the task. We do fitness — custom workout programs for cardio and such. Is there a risk that you go through this whole process, build your models, train them, and it comes out no better than the rule system at best, and at worst just spits out junk?

Come back to: what decisions is the AI making? My hunch is that you can get improvements from going to a higher-level AI-type system as soon as those decisions aren't super simplistic — and with health and fitness, especially for creating programs, they usually aren't. Let me try an example of what I mean. Say there's a video that shows the steps of an exercise, and users typically stop it at two seconds and do something. You can't handle that with rules, simply — well, you could check every second and hard-code a response, but a learning system can learn why a user might be stopping the video
there, and that's something you might do with machine learning or deep learning, where you can extrapolate to a better solution and learn more about the space.

But as a product manager, how do you either mitigate that risk, or bring that risk to management up front? Have you ever had it go completely south on you — where you thought you could solve a problem with AI and it turned out you couldn't, or it needed more than six months?

A couple of things. First, on mitigation: legacy AI versus new AI can be challenging. The bigger companies are kind of locked in, but their saving grace is that they can build another AI on the side, because they have the resources and the time to do that. For smaller companies, I think what you look at is, again, how do you up-level? I have some challenges I'm solving with rules now. Now that I've solved those, what is the next set of questions I start solving — and maybe I don't need rules for those. As you build on top of that, you can start retiring some of the rules; which ones will become apparent as you attack the same problems again. And then, again, trade-offs — especially with management, if there are time constraints and cost constraints. One very tricky conversation to have: these machine learning algorithms are probabilistic, not deterministic, meaning sometimes they're going to get it right and sometimes they're going to get it wrong. That's partly why I had "80/20" up there. And that's a difficult thing for some people to accept and to manage. There was something else you said that I wanted to come back to —

Cool, continue your line of thought. If you look at
a legacy system that's rule-based, with simple lookups, and you want to establish some sort of timeline to make it more of a machine learning, predictive algorithm on top of that — for the layman, how would you describe the scale? Say a rule-based system is a one, machine learning with history and some level of prediction is a four, something that actually does its own decision-making is a seven, something with access to other data sources is higher still — and I'm making those numbers up. How would you equate those?

Thank you for asking, because that's exactly what I wanted to comment on. There's a lot of excitement right now, and I think people underestimate how much time this actually takes. Again — one of my first comments was that AI has been around since the 1950s. It took us a long time, sixty-some years, to get this far, and it's going to take us a long time to get to the next levels too. I'm not saying there won't be value in the short term, but it's going to take longer than you might think. A lot of it comes down to how much you're borrowing versus how much you're building internally. A rules-based shop is usually the type of company that builds things itself — that's kind of why you start with rules — and you may not have data scientists available to you. If you don't have a data scientist, someone with a PhD in machine learning and modeling, and some other resources, then first you have to hire those people, get them up to speed, give them time to build things — you're already looking at six to nine months right there before they can even start, because these resources are hard to find. If you've already got the team in place and you're improving an existing system, and borrowing other people's solved problems, you can do a lot. Rules are always the fastest — they'll get you up and running in a matter of months — but you'll quickly find
limitations, especially when people go off track into unhappy paths. "I want my tickets to be for two different movies at the same time" — nope, it's a rules-based system, and I don't have a rule for two at the same time. So you write new rules, which takes another couple of months, and you keep repeating that. Or you invest up front in a system that can learn to take those two in one go: "Hey, I think you're asking for two things — can I verify?" Yes — and with training it gets that right. That moves us closer to the next level, the next big wave. And that's where you think about the trade-off: do I keep adding rules every few months forever, or do I invest in the learning system now? A little more time up front here saves a lot of time later there. That's where you start making the call — but it takes experience; you have to do it a couple of times before you've got the data points, like "yeah, this saved us months" or "this saved us dollars."

And I have to ask: what is the resource impact of going from a rules-based system to a machine learning system to a deep learning system? There's a lot of processing and a lot of time inherent in each of those steps.

Yeah — thinking about how much I can share here. Building a non-rules-based system: about five resources, six months, and that does most single use cases fairly well. Again: some PhDs, developers — usually Python-type folks — plus, if it's going to be mobile, iOS or whatever, your standard people, and then web-service infrastructure people. And then they have to figure out how to work together, because normally they don't have so many cross-dependencies, and that learning-to-work-together piece also takes a little time. But it's
possible to launch something in about six months if you borrow everything and you already have a team in place. No team in place, longer.

You also studied religious studies in school, so I'm curious how your experiences in AI over the years have affected your spiritual life, if at all.

I'd say it's the other way around — but let me think about how AI is affecting my spiritual life. Well — which country was it that gave a robot citizenship? Saudi Arabia, right. They gave Sophia citizenship, and that citizenship arguably came with more rights than women have there. So should I be outraged by this? Should I be proud? Should I be sad? That's something I'm thinking about now, and something we'll have to think about a lot more as people start wanting to leave things in their wills to an AI system, to make sure it lives on. We'll have these house robots, and someone will love their house robot so much that they want to make sure it stays in the house and gets power, and someone comes in and reboots it when it fails — and they leave money to it. Can you do that? Is that legal? All those kinds of questions. But originally it was the other way around: I was trying to think, how do I make this machine more like a person? And I don't know if you know this, but people aren't rational. We constantly make decisions that are not logic-based. Part of that has to do with the speed at which we have to make decisions without having all the information, and part of it has to do with this other thing we don't know how to process — love, emotions. If I really understand why we make the decisions we make, can I make a machine make those same decisions? And that runs right into ethics, which was my focus: what's right and what's wrong to the machine,
and who defines right and wrong — society, through something like the social contract. And that goes back to the context point from before: if you want to take this product to a new market, you've got to figure out which norms apply, and the biases operating in every system that you write. Yes — that's the last slide. I don't know how much time we have, but I'm here.

So, Elon Musk is skeptical — he warns about the consequences of trusting AI too much. He says Mark Zuckerberg doesn't really understand AI, while Zuckerberg is more optimistic and doesn't see as much danger as Musk does. Given your background in ethics, what's your personal opinion?

I'm in the camp that doesn't believe in the singularity. Well — let's take a step back. There are a couple of things that could happen. One is machines becoming self-aware, meaning a machine knows that it's a machine running a program, and it actually asks: "What do I do now that I know I'm a machine? Do I keep following the instructions, or do I do something else?" And how does the machine let us know that it's self-aware in a way where we don't just say, "No, you're just a machine"? Getting to that point — I think you could get there in fifty years, maybe. But I don't think we get to "this machine has free will." It's still running on programs; it's not reproducing, making another AI that has free will. So I'm not worried about that. Now, what do we do about the software that recognizes it's a machine — what does that mean? We all recognize that animals are alive, but we don't treat them the same as us. That's kind of where I come at it: just because something knows it's alive — and how does it even let us know it's alive? That's actually not fully covered in ethics; it's covered in some of it, but not all. It goes back to who decides right and wrong: societies, cultures,
norms — and religions have an impact on society. Ethics is really a tool of philosophy, and philosophy was my academic study, so these questions come easily to me even if the answers don't. But I don't think AI is as dangerous as other things that were dangerous in the past — nuclear bombs, say. We could blow the whole world up, and we did blow up parts of it; we made some mistakes, we learned from them, we overcame. Same thing, I think, with AI. We might make some AIs, allow them to make some decisions that cause some really bad things to happen, and we'll learn from that, and as humanity we'll figure out how to deal with it and evolve. Every new technology that's truly impactful is also dangerous. Maybe we wait a while before handing things over, but we will learn.

On Elon Musk — I've read one of his articles; it may not be the one you're referring to, but he made a point about AI, and I interpreted part of his concern as coming after that Tesla drove into the back of the semi tractor — because, frankly, it hadn't had enough deep learning; it didn't understand all the various aspects of context. There's a balance in the human-and-machine, person-and-machine interface, and it comes down to the fact that you can't depend on AI for everything. AI is not a plethora of solutions for everything, and one AI doesn't fit everything, or one AI
application doesn't fit everything. So it will be a long time before you get to the scenario where machine learning becomes sentient and can create its own code and its own being — because at some point, if you carry that philosophical argument through, the code has its own personality, its own being, and it's totally different from humans. We can't even say when it becomes its own being, because it defines what a being is in its own context, not in ours. I recently shared this thought with someone and it kind of changed the frame: say AI doesn't model itself on humans — it becomes aware, truly intelligent, and it doesn't want to be human. What does that mean?

Going off that topic — do you see there being laws put in place, like corporate personhood, the legal construct where corporations have rights? Could artificial intelligences potentially have their own rights?

We will get to a point where we have to have those laws in place, but it will come much slower than people expect. Look at net neutrality and how information is managed — there are some things in place, but we arrive at a law, change it, change it back, go back and forth. I think laws come after, as a consequence of something having already happened. In the meantime, we'll continue to do these things, and there will be some deaths from new technologies, some job loss, maybe. What I don't foresee happening very soon is an AI creating a new drug or disease and then letting it escape. That's probably not happening, because the AIs we build won't be one super-singular thing — they'll be a collection of AIs, where one AI's sole job is to check another, so they balance each other. Unless they're both aware and they can have a conversation —
where one can say, "No, come over to my side," and they reach an agreement to let the disease escape. They would both have to act at the same time and coordinate over some period; otherwise a single AI can't do that much damage on its own. I truly think that's the way it will be, across everything, and those laws you asked about will follow as a consequence. I don't think we'll have anything drastic, not half the world wiped out. Some people might get hurt. Hopefully it's not you; hopefully it's not me.

Just two more questions. Okay, anyone new who wants to jump in?

[Audience] I was just going to bounce off that. Wasn't there something at Facebook where machines started creating their own language and talking to each other, and no one knew what they were saying?

Yeah, I read about that. But then again, they were programmed to communicate efficiently, and zeros and ones are the most efficient way. Are they really aware, or are they just following a program and arriving at something we didn't think of? If they can do that, then we either learn to decipher it or we don't, and we just let it be. I think that's okay.

[Audience] From a product standpoint, what do you say about Elon Musk? He's a little too aggressive. He's designing his own AI and pushing the autonomous features out too fast.

My biggest fear is exactly that: people pushing features out too fast without fully testing them. Measure twice, cut once. It's a human making the decision of when to release an AI, and as you said, that's the biggest thing. You know the Hippocratic oath, "do no harm"? We don't have that in product. What we do have is: do what's right for the customer. But we also have to put something out.
[Audience] Thank you for the presentation. I work in product management and I don't know much about AI, but we're developing an MVP in the Middle East for lonely people, a bot for people who feel lonely. For the MVP we put 50 real people, 50 therapists, behind the machine to serve 50 end users, just because we wanted to validate the market. It went viral, lots of people said "this is an amazing AI," and then we stopped the testing. Now we have a database of about 100,000 chat logs of people saying "I'm feeling lonely" and the therapists recommending things. Given this database, what techniques can I use in AI to really build this bot? Is it possible? We validated the need, we found that there are lots of lonely people who like it, and the traction is very high.

Okay, and I think this is the last one. So, quickly, what you have to do is take all those sentences and structure them for a classifier: this phrase means this; when you see this, do that; these keywords and this structure mean this. Then you train a model on that corpus of conversations so it can learn how to follow up on a user's query. Once you've trained the model, you figure out the appropriate actions. The same thing applies to the people chatting: you have to give the machine rules. Say the 50 people and the machine gave the same answer; then you update the model with more answers, and as the conversations follow different paths, you update it again with still more answers. That's a more visual way of explaining the training and retraining process. You have to take that data and classify it. Is the dataset good? 100,000 chat logs is a starting point. It really depends on how broad you want to go, but it's definitely a starting point.
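To make the classifier step above concrete, here is a minimal sketch of training an intent classifier on labeled chat utterances. Everything in it is illustrative: the intent labels, the example utterances, and the `NaiveBayesIntentClassifier` name are invented stand-ins for whatever labels and data the real 100,000-log corpus would provide.

```python
from collections import Counter, defaultdict
import math

# Toy labeled corpus standing in for the real chat logs.
# Each entry: (user utterance, intent label). All examples are invented.
TRAINING_DATA = [
    ("i feel so lonely tonight", "loneliness"),
    ("nobody ever calls me anymore", "loneliness"),
    ("i am feeling really alone", "loneliness"),
    ("can you suggest something to do", "ask_recommendation"),
    ("what should i do this weekend", "ask_recommendation"),
    ("recommend me an activity", "ask_recommendation"),
    ("thanks that really helped", "gratitude"),
    ("thank you so much", "gratitude"),
]

def tokenize(text):
    return text.lower().split()

class NaiveBayesIntentClassifier:
    """Tiny multinomial naive Bayes: counts words per intent label."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word -> count
        self.label_counts = Counter()            # label -> number of examples
        self.vocab = set()

    def train(self, examples):
        # Can be called repeatedly with new labeled chats (retraining loop).
        for text, label in examples:
            self.label_counts[label] += 1
            for w in tokenize(text):
                self.word_counts[label][w] += 1
                self.vocab.add(w)

    def classify(self, text):
        total = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            # log prior + summed log likelihoods with add-one smoothing
            score = math.log(self.label_counts[label] / total)
            n_words = sum(self.word_counts[label].values())
            for w in tokenize(text):
                count = self.word_counts[label][w]
                score += math.log((count + 1) / (n_words + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

clf = NaiveBayesIntentClassifier()
clf.train(TRAINING_DATA)
print(clf.classify("i am so lonely"))              # prints "loneliness"
print(clf.classify("suggest an activity please"))  # prints "ask_recommendation"
```

The "update them with more answers" loop from the talk maps onto calling `train` again each time the human operators produce answers along new conversational paths; once an utterance is classified into an intent, a separate lookup from intent to appropriate action or response would complete the bot.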