This is TechSoup Connect. My name is Jerome Scriptunis, and I run the TechSoup Connect chapter for Time Banking Organizations. Today's topic is Everyday AI for Everybody. Our guest speaker is Ken Mahalak. He's a computer science lecturer at Ocean County College, and he's been working in collaboration with Intel to develop a curriculum around artificial intelligence. We're very fortunate that he's sharing some of his first day off in a very long time, I think, to help us understand and appreciate AI from the perspective of people who are largely not technology developers but technology consumers. We need to be smart, mindful, and aware, especially in the spectrum of our work, where we're working with individuals from all walks of life, all stages and ages of human development, and we want to be able to communicate, appreciate, and understand. That's one of the goals for this morning's, let's call it, information sharing: increasing our awareness of how artificial intelligence surrounds us. It's like, say, the weather: it's in our environment, and it's to our benefit to be mindful of that, to know how to work with it, and to know it when we see it, although it is a big field. I've known Ken for several years. He's one of the nicest people to communicate with, because Ken has a deep appreciation and understanding for how people are affected by technology and an extraordinary patience for teaching it. So he's a wonderful person to have as a guest speaker. Ken, thank you very much for giving us your time and the benefit of the tremendous amount of work you've put in recently on AI. So tell us a little bit about your life, or your life with AI, and help us get excited. Oh, okay, my life? I was like, how deep do you wanna go? No, let's keep it light. I'm a Gemini. And then get us excited about how we could be just a little bit wiser and smarter with technology. Sure. Yeah, by the way, thanks for that wonderful introduction.
Everything Jerome said was 100% flattery. But anyways, yeah, let me just spend a few minutes with you guys talking about AI, give you a feel for it and maybe where it's headed. I'll take questions at any time, okay? And rather than you guys typing things in, I've got some things I can show you. Let me just get my mouse here, okay? So do this: go to nearpod.com, all right? Open up a browser and just type nearpod.com, okay? This way I can push information out to y'all, links and stuff; you just gotta click on the button. Yeah, put nearpod.com in the, oh, in the chat, awesome. In the chat. Okay. You may have to resize your screen if you're working with one screen. What's the, don't expand. I'm gonna see if I can do, exit full screen. So that's what I did on my display screen: I have a laptop and I exited full-screen mode, and that opens up a little space so that you can click on nearpod.com and then enter the code, L-Y-X-I-E, do I have that right? Yep. Yeah, I should be showing it on my screen. You are, you are showing it on your screen. So Cora, if you could please try that, Sheena, Audrey, Valerie, Ken, if y'all could give that a try. And while we're doing that, I'm gonna try it on my phone and see if it works, see what happens. So I've got Nearpod coming up on my phone. And when I go there, I click, what, Join a Lesson, is that what it is? I think so, yeah. Join a Lesson, and then you put the code in, Y-X-I-E, and then I click Join. So it's slightly different on the phone, okay. And then I can put in my name or my nickname, okay. Okay, I'm putting that in, and I'm in. And there's also an app, a Nearpod app, which you can also try if you like, but we've got enough going on; I don't want to flood myself. Yes. Okay, Ken, back to you. Okay, sounds good. If everybody is on Nearpod, it's just easier; I can push some information out to you.
Okay, yeah, this is what I'm gonna talk to you about, but it'll be brief. I sent Jerome an overview earlier. It's probably more than we can obviously cover today, but I just want to give you a feeling for it. AI is everywhere already now. You use it every day. You may not think about it, but you use it every day. If you do any type of talking with Alexa or Siri, or use facial recognition to unlock your phone, or if you do any texting with predictive text or autocomplete suggesting words for you, that's all AI. All right. So, yeah, let me ask a question: does AI mean it's predictive, it's forecasting, it's a guided decision-making capability? It can be. There are several domains in AI. I was gonna cover them, and I can talk about machine learning and neural networks, stuff like that, but... Okay, go ahead. So I was gonna talk about what AI is, briefly talk about the domains, and talk about some of the technology behind AI. By the way, it's all mathematics. If you're interested in this field, the things you wanna learn are statistics with a little bit of calculus, and of course, if you really wanna get into it, Python programming, which is a must. Data analysis skills are good too, if you really wanna get into it. But there are a lot of no-code platforms. In fact, I just finished an introductory AI course where we used a no-code platform that's like drag and drop. And there are a lot of platforms out there from Google, Amazon, Microsoft and all that where you don't really need to code; you can upload your data and do stuff. So... Yeah, I think, as we see on the screen you're sharing, what we wanna arrive at is some guidance from you, Ken, towards the end, on ethics in AI. Oh, the ethics, yep. Can we trust it? Should we be uncomfortable about anything? Well, let me be blunt. Can you trust it? It really depends, because it's all about the data. In fact, AI is all about the data you use to train these models.
And if you're using biased data, you're gonna get biased results. I definitely will talk about that. So let me just move forward here, if I can. There we go. Okay, so if you're really into AI, a gentleman I like to follow is Andrew Ng. His credentials speak for themselves. He teaches at Stanford, and he started his own company that actually advises a lot of other companies. So if you wanna look at practical AI-type applications and you wanna see what's really going on in industry, he is an excellent gentleman to start with. He's got a bunch of courses on Coursera that kinda kicked off, or at least helped start, the revolution. In fact, he was a co-founder of Coursera, if you've ever used that training platform. He had a machine learning course, which was a big hit. But he has this AI for Everyone course that anybody can take, really. And this is a slide I borrowed from him that says AI is the new electricity and it's gonna transform businesses. And he's right; it's already doing it. Applications in healthcare, X-ray scanning, MRI scans; agriculture, John Deere is a big AI proponent. They bought several companies, and, talk about self-driving vehicles, they're designing self-driving farm equipment. It's a little bit easier to model that. So it's already transforming these industries, but he's a good gentleman to follow. And I use this slide because he compares deep learning to a superpower. So we'll talk about that. Like I said, he has some really good courses; the AI for Everyone course is awesome. Okay, so that's my plug for Andrew Ng. Let me just go to this. So like I said, you're already using it every day. If you talk to Siri, if you've got Google, if you've got Alexa at home and you say, hey Alexa. I don't have an Alexa, well, I have one, but I named it something different. I use it pretty much only to play music, but I do have friends that use it to control their lights and all that stuff.
But when you talk to those things, behind the scenes is a neural network that is recognizing your speech and translating it into actions. And if you're using your face to unlock your iPhone or any other device, that's a convolutional neural network; that's actually artificial intelligence. Anything that will auto-tag photos, like when you go to Facebook and it identifies you, that is artificial intelligence. In fact, the big tech companies, Google, Facebook, Amazon, Microsoft, the investment they're making is incredible. Facebook is actually a big investor just because, think of all the posts they deal with, I forget how many posts they deal with per minute, but they have to analyze them for offensive text or misleading videos and stuff like that. So they put a lot of research into it. AI is assisting you, like I said, when you text, if you're using predictive text; if you're in Gmail, they have a feature that finishes your sentences for you and stuff like that. That's all artificial intelligence behind the scenes; Google uses it. Obviously everybody's familiar with autonomous vehicles being tested by Uber and Waymo and even Tesla, right? All these safety-assist features. And more is coming. And there's this funny quote that says, after a while, you don't even realize you're using AI; you just accept it as part of your daily life. And this kind of frustrates AI researchers, because everybody's expecting these talking robots and all that stuff, and they're like, yeah, all this great stuff you did with facial recognition is good, but when are you gonna give me that robot? So anyways, you're using this every day, and Jerome alluded to that New York Times article. It's maybe a two-page article that goes into a little more detail on how you're using this in your daily life. So if you think it's not here, you're wrong. It's here, you're using it, all right?
And you know what, it's just gonna be adopted more and more by companies, right? During COVID, those were very challenging times, and they still are. Companies in some cases weren't able to find people, so they had to adapt their businesses. So COVID has also accelerated the adoption and the research. You can see some of the statistics here; this comes from a Harvard Business Review report. 55% of companies say they accelerated their AI strategy, and almost 90% say it's becoming a mainstream technology. So it's here, and if you're interested in this type of field, it's a great career, a great future for anybody, students and existing industry professionals. This is another thing, yeah, go ahead, Jerome. Do you think it will get to the point where there's going to be a warning label on applications informing the consumer that this application is using AI algorithms based on data sets collected from such and such? As you said, when you get to the ethics of it, it depends on the data set and the training. Sure, sure. So that depends, I think, on legislation. I think companies will not offer that information up, but recently the European Union has been very active in making sure citizens know that AI is being used, and New York City has just recently passed some laws, because a lot of times AI is used for resume screening, requiring that people be made aware that an AI application is looking at your resume and that you have the right to have a human intervene. There's a big push on this. There are a couple of issues with AI besides the data. One is always having humans in the loop, to make sure you don't have an AI that goes slightly rogue and does things you don't intend it to do, and there are great examples of that. One is Amazon built a hiring application, and they trained it with the resumes of existing engineers. Who are the engineers? Largely it's a male population, largely white male and Asian. So anyways, what did this Amazon AI-based hiring assistant learn how to do?
It learned how to discriminate against women. It found a way to detect when a woman submitted a resume, by looking for descriptions like women's clubs or predominantly women's colleges, and it excluded those resumes. It's kind of garbage in, garbage out. You put data in; it's not really learning anything the way we learn things. If you feed it discriminatory data, you're gonna see that in the results. So you gotta be careful. Yeah, so Ken, are the data sets mostly historical data sets? I thought that artificial intelligence was learning, that it was gathering real-time data. It can; it depends on the application you're building. Some can be based on historical data. Like, maybe if you're training something to recognize breast cancer or something like that, there's a very popular data set from the University of Wisconsin you can train on. But you might have to update it to include recent data; that's something you have to deal with when you talk about deploying. How about, like, with that article from the New York Times, about how photographs on my phone are organized or put together in a story, or the facial recognition? There's no historical data set on that; it's built as I'm using the phone, is that correct? Yeah, but along the way, typically what has happened behind the scenes is Google or Apple has trained that neural network to recognize certain things, certain patterns, with millions of images actually. And they can recognize dogs, cats, people, faces, so there's a lot of training that has to be done. Anyways, this slide just says the same thing: businesses are adopting it and there's a changing landscape. This is the World Economic Forum saying, hey, in 2025 AI is gonna have such an impact that some of these jobs are gonna disappear. So there's another impact to society where it's like, hey, we won't need maybe accounting bookkeepers, things like that.
We'll be able to actually build AI applications that can do a lot of this routine-type work. Not necessarily take the place of all the work that accounting does, but maybe some of the mundane items. So Ken, as you're giving us these examples and going through the slides, if there's something that comes to mind, have you heard any stories about how AI helps people in the community? For example, if someone is homeless, or if there's human trafficking, or, I read a recent article about beagles in airports that sniff out contraband or something like that. Does AI have a way to sense information other than through numerical data? Can it pick up vibrations, smells, or is that still in the future? That type of thing, smells and vibrations? That's definitely possible, but there are a lot of practical AI applications that have assisted. They use a lot of AI in pharmaceuticals already; it's being used to come up with new drugs to treat diseases. They're using AI in particle physics to uncover new particles, discover new planets, things like that. Largely what goes on there is, there are a few forms of AI you can train with, and primarily 95% of all AI applications today are what's called supervised-learning-based applications, which means you're training them, okay? Yeah, it actually can sense facial emotions too. China has an application that they recently put out that does that type of stuff: senses when you're happy, when you're sad, when a class is not paying attention, things like that. Oh, that's interesting. No, that's very interesting; knowing when you lose engagement would be tremendously valuable in human services. Sure. But there's another form of AI called unsupervised learning, where you've got a ton of data and you just don't know what's going on in there. So you'll take that data, you'll throw it into an AI-based application that will find clusters of information and be able to relate them.
I didn't know that when somebody goes in on a Friday night to buy beer, they also buy diapers. That's a famous case where, I forget if it was Walgreens or Walmart, they kind of put that together. Now, that is unsupervised learning; it's still a holy grail. If you can figure out how to find that type of stuff, you're on your way to general intelligence. But let me just mention this: Intel's really behind all this, and they're working aggressively with community colleges and vocational schools to bring these skills and this knowledge out to a wider sector. Traditionally AI was offered in junior- or senior-level or master's-type courses in college, and now they're bringing this down to community colleges. A lot of the materials I use and a lot of the information I got is really due to Intel and some of the courses and information they sent me. My school, I teach at Ocean County College, was one of the early adopters of Intel's program, and we've rolled out a degree and courses. And there's more information if you're curious: they have a program called AI for Workforce. I'm gonna push that out to you now. So if you click on your Nearpod, you'll get more information about their AI for Workforce program, and it shows the schools there. You can take a look at that, and by the way, that's worldwide. And then over here, I'm just pushing out our artificial intelligence page for Ocean County College, just to get a little marketing in. That's like when you go on the talk shows and you plug stuff. Okay. So with that, buy my new book, buy my new CD, see my new movie. Okay. So the AI for Workforce information, is that something that might be interesting for, like, high school students, or if someone wants to dabble around a little bit? Yeah. In fact, Intel originally launched this with high schools in Singapore, from what I understand.
And in fact, in the class I finished up teaching on Tuesday, I had nine students, and eight of them were from local high schools here in Ocean County. In the intro to AI, I go into a little bit of math, but the whole goal is to not be math-heavy, to expose people to the various areas, and if they're interested, they can proceed with the curriculum. Those courses are meant for anybody. So in the fall, I have some business majors that are taking the AI course. It's really meant for anybody. Yeah. All right, let's continue. So I always ask the class, what do you think AI is? What is the definition? John McCarthy, who is viewed as the father of AI, he did a lot of things too, time sharing, invented the Lisp language, defined it as the science and engineering of making intelligent machines. In other words, can we get machines to operate with the same level of intelligence that we have? And if you look into history, it is funny: prior to everyone joining, I showed a video of a famous scientist called Claude Shannon being interviewed in 1961, and they expected they would have fully functioning robots like you see in movies by the early 1970s. And we still don't have it, because it's an incredibly hard thing to replicate human intelligence. In fact, we don't even know how human intelligence works, to a large extent actually. And some people are predicting this type of technology is maybe still a couple hundred years off, but who knows? But that's John McCarthy's definition. This is great: I put this slide in here because the chief decision scientist at Google, Cassie Kozyrkov, has a course called Making Friends with Machine Learning. She is very good; she succinctly lays out a lot of information on various machine learning algorithms, things like that, and what AI is. And she always says AI is for tasks so complicated that programmers can't write the instructions. And that's it.
A lot of times, if you come from a software development background, you're given a set of rules, you write some software, and it does it. With AI, if you think of how we think, you can't easily write the rules. Maybe you can write a thousand rules and stuff like that, but we're just more sophisticated than that. We understand context. For example, it takes about 15 hours for a person to learn how to drive a car, okay? And a famous scientist, a gentleman named Yann LeCun, says that to teach an AV, an autonomous vehicle, not to drive off a cliff would take 10,000 training examples before it figures out, okay, I can't drive off a cliff. To us, it's obvious. These things aren't that smart; you have to train them, and they're just nowhere near our level of intelligence. So the things in movies, and you can read Yann LeCun on this, are a little bit, I think, extreme in terms of AI is gonna destroy us. Maybe at some point, but we've got a few years. So what AI do we have today? We really have what's called weak AI. These applications are very brittle; they'll do one thing and one thing only. Okay, that's it. General AI, if you guys saw the Iron Man movies, Jarvis and stuff like that, is years away; 2050 to 2075 is probably an optimistic guess. Some people think it's 200, 300 years. Some people think we'll never achieve that type of intelligence, all right? By the way, there's a link there to a great book called Architects of Intelligence, where you can read about all the pioneers. They interview the pioneers and the leading experts in the field, ask a bunch of questions, and you get a lot of interesting opinions from them. These are the levels of AI, by the way. We're in the first one, narrow: we train it to solve one problem, that's it, very brittle. The second one would be equivalent to Jarvis, where we could sit down and have a conversation with an AI-based application. That's called general intelligence.
And then there's superintelligence, which is the point where these machines are much smarter than us and maybe they're making the decisions about what we can do. Maybe they're managing us, I don't know, but that's way off in the future. Go ahead, I'm sorry, is there a question? Yeah, one came in from Cora asking about AI chatbots. Would those be stage one or stage two? Yeah, big-time stage one. Chatbots are very brittle. In fact, OCC has a chatbot called Reggie. It was funny: in class I had the kids playing with it, asking it if it was sentient and stuff like that, and the minute I finished class, I had an email from the manager who's responsible for it, asking me, hey, did you have a class playing with it? So that got around. But yeah, chatbots are getting better, but they're really only developed to answer a limited set of questions. If you ever spend time with a support chatbot, it doesn't take you long to realize, I gotta talk to a real human. We've got about 15 minutes left, and I know we're gonna talk about some ethics, and I wanna make sure we get into the rock, paper, scissors. Oh yeah, I'll go through this pretty quick, don't worry about it. Okay, okay. So I do like to talk a little bit about the history. People think this happened overnight, but they've been working on this really since the 40s; it's been here a long time. Neural networks were first proposed in 1943, so it goes way back. And the whole basis of neural networks is something called the perceptron, which is meant to mimic a neuron, but it's a mathematical model. Basically, you get a bunch of inputs, you sum them up, you pass the sum through some type of mathematical activation function, and if you hook a whole bunch of these things together and you adjust the weights and train it, you can get it to kind of function like a human brain, but only on one particular task. Like, okay, tell me if this is a cat or a dog. It won't be able to do much beyond that.
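The perceptron described here, a weighted sum of inputs passed through an activation, fits in a few lines of Python. This is a minimal illustration with hand-picked weights (not anything shown in the talk); with these particular numbers the unit happens to behave like a logical AND gate:

```python
# A single perceptron: weighted sum of inputs passed through an activation.
def perceptron(inputs, weights, bias):
    # Weighted sum of the inputs, plus a bias term
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Step activation: fire (1) if the sum crosses zero, otherwise stay quiet (0)
    return 1 if total > 0 else 0

# Hypothetical weights chosen so this perceptron acts like an AND gate
weights, bias = [1.0, 1.0], -1.5
print(perceptron([1, 1], weights, bias))  # fires: 1
print(perceptron([1, 0], weights, bias))  # does not fire: 0
```

Training, as the talk says, is the process of adjusting those weights automatically from examples; here they are simply fixed by hand.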
So it's been around a long time, and there have been what are called AI winters, where you get this hype, every other article is about AI, and then they realize we really can't do all the things we talked about, and so funding dries up. There have been a couple of those, and for all we know we may be in a hype stage right now, although Andrew Ng doesn't think so, and with some of the advances in deep learning, most people think not, that this growth will continue. All right, and what caused the resurgence? Well, a few things. Massive amounts of data: the data you are now generating with your phone and your internet use, your purchases on Amazon and stuff like that, it's unbelievable. You guys are predictable; you're generating patterns, and from those patterns I can start predicting what you're gonna do. The second thing is, let's face it, our CPUs and GPUs, we've just gotten to the point where compute power has increased amazingly. And finally, the research on deep learning has paid off, together with the data, because you need millions of data samples to train this, and you need GPUs, graphics processing units, to really crunch the numbers, because it's just a bunch of math going on behind the scenes, right? In 2012, a famous researcher just killed what's called the ImageNet competition, just decimated it with a neural network, and that sort of caused the resurgence and caused industry and academia to say, wait a minute, maybe there's some stuff here. I'm gonna skip some of this and just go right to: how is AI used today? And I talked about that. Computer vision, right? Natural language processing, chatbots. Somebody mentioned chatbots; a lot of times what they'll do is what's called sentiment analysis, where they'll look at reviews, like Yelp or Google reviews, and tell you this restaurant's good to eat at or not. Machine translation: Google Translate has gotten pretty good.
Speech: we have our smart assistants. And you start seeing this in robotics, with self-driving cars or John Deere farm equipment that is recognizing diseased crops and treating them, right? So you're seeing this used all over the place. Can I jump in with a question? I saw something, I think it was on 60 Minutes, about how video technology and computing can represent someone doing something in a scene, but it's not actually that person, and it looks so real. And that's used, I don't know, in advertising or something. That's, yeah, a deepfake. Deepfakes, yeah, that's a branch I don't go into much, a branch called generative adversarial networks. But yeah, that's a concern, generating fake video and posting it on social media platforms and all that stuff. And there's an ethical area around spotting deepfakes. But I personally don't know that much about that technology, so I can't comment too much on it. So, the quick domains, and we'll get to demos, right? These are really the big domains: machine learning; neural networks, which, if you say neural networks now, means deep learning. What does deep learning mean? It just means you have a bunch of layers that are set up to kind of mimic the brain. And then of course computer vision and natural language processing, which is you interacting with your phone, with Siri or Alexa, doing things like that, okay? So, machine learning. It's funny, I always say if you like games, you're perfect for AI, because this whole field started largely because Arthur Samuel liked to play checkers and he developed a program to do that. There's a picture of him; he coined the term machine learning. And what machine learning is: in the old days we had input data, we wrote a program, that was traditional, and you got output. Now what goes on is you've got data, input and outcomes, and the machine learning application produces the program. Now what is the program?
The program, in the majority of cases, is gonna be some type of mathematical formula or model. That's what's going on: it's looking at patterns and then coming up with a formula that you can use to classify or make a prediction, all right? I'm not gonna show it, but there's a two-minute video on machine learning that, if Jerome distributes the slides, is really good and succinct and gets to the crux of it really quickly. Let me jump to that slide. I'll put it in, and could you tell me which page that was on? I'll call you after this and tell you which one. Okay, all right. So you guys gotta remember, this is all about data. I spend a good couple of lessons in class talking about data. It is important: when you're working on AI applications, surprisingly little time is spent on the modeling. Most of the time is spent on getting the data, storing the data, getting rid of noise, analyzing it. That's 80 to 85%, and it's all about the data, because you train these things, and if you've got bad data, you're gonna get bad results. All right, here's our first example. Okay, Cora and I found the video, the two-minute video. Oh, good, okay, awesome. So I just pushed out that rock, paper, scissors link. This is a fun machine learning application. If you click that link, it's a game that was developed by Afiniti, and what this game is doing is looking at your patterns. Now, you'd have to play this like a thousand times, maybe more, but after a while... So I always just scroll down on this main page and hit "no thanks, play with mouse". You can let it use your camera, but I don't do that. So I just say no thanks, and when you do that, it'll bring up round one. You're gonna play the machine: you pick paper, scissors, or rock. And if you just click on that, you can see, okay, I did rock, it did scissors, I won.
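What a game like this might do behind the scenes can be mimicked with a tiny frequency-based predictor: track what the human has thrown so far, predict their most common throw, and play its counter. This is a crude sketch of the idea only, far simpler than whatever model the real game uses:

```python
from collections import Counter

# What beats what in rock, paper, scissors
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def counter_move(history):
    # Predict the human's most frequent past throw and play what beats it.
    if not history:
        return "rock"  # arbitrary opening move
    predicted = Counter(history).most_common(1)[0][0]
    return BEATS[predicted]

# A human who repeats a sequence quickly becomes predictable
history = ["rock", "rock", "paper", "rock"]
print(counter_move(history))  # human favors rock, so the machine plays paper
```

The real game learns from thousands of rounds across many players; the principle, turning your past pattern into a probability of your next move, is the same.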
But if I keep on repeating, and I always tell my students in class, because we don't have a lot of time, just repeat your sequence, all right? Just keep on doing that, and you'll see after a while, you can't win. You can't beat this thing, all right? So while you guys have some fun playing that, what's going on behind the scenes is this: humans are very predictable. Even though you think you're being random, unless you really go out of your way, you're gonna have a certain pattern. And after thousands of examples, and it's not five or 10 or 20, right, after thousands of examples, maybe millions, I'm gonna be able to predict exactly what you're gonna do, probably within the 80 to 90% range. It's all about probability at this point. And that's exactly what's going on with machine learning, right? With machine learning, there's a family of models. I talked about supervised learning; that is the primary application these days. There's unsupervised learning, and there's reinforcement learning. And this is what you're doing: you guys are generating data. And when you're looking at predictions, it comes down to a regression problem or a classification problem. Do you have cancer or do you not have cancer? That's classification. All right, I'm gonna sell my house, what do I think I'm gonna get for the sale of that house? That's regression. Or clustering, right? Clustering is unsupervised. That's where you don't know what's going on with your data. You throw it into an AI application, and there are a few algorithms that look at this type of stuff, and it correlates. It says, hey, look, I saw that when people go into a store on Sundays, they're buying milk, but they're also buying this other thing, so let's put them together. Those are patterns where there's just too much data for humans to analyze, so you rely on machine learning, right? And there are a lot of them. Here are all the machine learning algorithms.
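The regression case mentioned here, predicting a number like a house's sale price, boils down to fitting a formula to past (input, outcome) pairs; the fitted formula is exactly the "program" machine learning produces. A minimal sketch in plain Python, with invented numbers rather than anything from the talk:

```python
# Supervised learning in miniature: from (input, outcome) pairs,
# recover the formula y = a*x + b by ordinary least squares.
def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Slope = covariance(x, y) / variance(x)
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Toy training data, secretly generated by y = 2x + 1
xs, ys = [1, 2, 3, 4], [3, 5, 7, 9]
a, b = fit_line(xs, ys)
print(a, b)        # the learned "program": a = 2.0, b = 1.0
print(a * 10 + b)  # use it to predict an unseen input
```

Classification works the same way in spirit, except the fitted formula outputs a category (cancer / no cancer) instead of a number.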
I don't cover all of them in my course, but I cover a few of the more common ones, just to give students a background. We look at, really in a no-code way, something like nearest neighbor, which is a very simple algorithm: when you classify a point, ask, how were my neighbors classified? Some of these things are pretty simple; some of these things are a little more complex. All right, hopefully you're having fun with that game. Neural networks: they're really based on your brain, how they think it functions. They think it boils down to the connections between the neurons, where the information is stored. And by the way, we have 86 billion neurons, and in the neurons in our brain is all that knowledge that we spend a lifetime learning. And humans are incredibly adaptable. We learn pretty quickly. We learn pretty quickly that, hey, that dog may bite, where it might take 10 million data rows to train a neural network so it could figure out this dog will bite. So machines are just not at that level yet. This is a neural network, by the way. It's just a bunch of computer-based neurons called perceptrons, and you typically have layers. You have an input layer where you feed input; that would be data. You've got these hidden layers that are meant to represent our brain; if you have more than a couple of hidden layers, that's what's called a deep network. And then you have output, which will tell you: is this a dog or not? Is this bone broken or not? So here's an example of a deep neural network; usually when the hidden layers get above three, that's called deep. And these are really the classic applications: speech recognition, facial recognition, chatbots, recommendation systems. Not in all cases, but when Amazon's recommending things to you based on previous purchases, those are neural networks behind it.
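The nearest-neighbor idea mentioned above, classify a point by how its neighbors were classified, is short enough to write out in full. This is a toy sketch with invented 2-D points and labels, not course material from the talk:

```python
import math
from collections import Counter

def knn_classify(point, labeled_points, k=3):
    # Sort the known points by distance to the query point
    by_distance = sorted(
        labeled_points,
        key=lambda item: math.dist(point, item[0]),
    )
    # Majority vote among the k closest neighbors
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Hypothetical training data: two little clusters with made-up labels
data = [((1, 1), "cat"), ((1, 2), "cat"), ((2, 1), "cat"),
        ((8, 8), "dog"), ((9, 8), "dog"), ((8, 9), "dog")]
print(knn_classify((2, 2), data))  # lands near the first cluster: "cat"
print(knn_classify((9, 9), data))  # lands near the second cluster: "dog"
```

No model is "trained" here at all; the algorithm just remembers the data and votes, which is why it makes a good no-code first example.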
There's a fun game from Google I like to use that I'm gonna push out to you, called Quick, Draw! You can doodle. You say, let's draw, and it'll tell you to draw a cake. And if you've got your voice turned on, okay, I got it. You start drawing, and it'll tell you, I think you're drawing... I see pond. Or pillow. Or hockey puck. Let's see if I get cake. Or birthday cake. Yeah, here we go, birthday cake. See, there's a neural network behind it. And actually, when you do this type of stuff, you're helping to train this thing for Google. Sorry, I couldn't guess it. So that's a fun game that I like to do in class. Okay. Can you flip back to that slide for just a few seconds, the one that showed the brain is 60% fat? I went right over that. Oh, I actually got rid of that. Yeah, I got rid of that. All right. Well, you told me I was on a limited time schedule, but in my case, my brain is more than 60% fat. Okay, this is good. I'll do computer vision. Look, we see this, right? We see the faces of two children. They look like they're male. They're smiling. They've got nice greased-back hair. Looks like they're wearing the same shirt. What do computers see? They just see a bunch of numbers. We see three dogs running. I don't know what breed of dog that is, but right? Computers see numbers. All right. So what you do is you have something called a convolutional network, and you define layers that basically flatten the image, meaning they take away color, because analyzing black and white versus color is a lot easier from a computational perspective. You have these layers, convolutions, that recognize things like edges and circles. Then you get down to recognizing faces, and the output will be: this is a person, this is a dog, things like that.
In fact, here's a good example that shows a picture of, I actually think, one of the deep learning researchers' wives, and you can see the various layers, the first hidden layer, the second hidden layer, right? What these layers are recognizing. And then at the end, it predicts a car, a person or an animal. That's how these things work. The issue with neural networks, though, and there's research being put into this, is that they're not explainable. And so people want explainable AI. When I go to my doctor's office and the doctor says, you have cancer, and I go, how did you arrive at that? And he goes, I couldn't tell you. I sent it to a neural network and it said so, but I don't know why; it's just a bunch of computations, and it said you had cancer. So there's a big push towards explainable AI. With some algorithms, like decision trees, you can see how the computer arrived at its conclusion. Neural networks aren't there yet. So there's some controversy there. I'm gonna mostly skip DALL-E; people have heard about DALL-E before. If you haven't, it's an application from OpenAI, and I'll push it out to you. It's been trained with millions of images, and if you type in a description, it will try to generate that picture for you. There was just a recent article where somebody typed in "the last selfie" and it generated these apocalypse-type pictures, showing a person in what looked like a nuclear blast. Hold on, that was a sad article. But anyways, DALL-E: earlier I was messing around with it, and I asked it to draw the Hulk getting a pedicure. And you can see the Hulk's got his feet up there. They're not green, though. This is probably a better picture of the Hulk's feet getting a pedicure. And there's the Hulk; looks like the Hulk's actually doing something there. I'm a big fan of the Hulk. Ken? This is fascinating. Can you spend one minute or so on ethics? Yeah, yeah. Was that the moral machine one?
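Here's what "explainable" means in practice, as a toy sketch (the thresholds and feature names are invented for illustration, not real medical criteria): a decision tree is just nested if/else checks, so every prediction can carry the exact rule that produced it, which a neural network's millions of weights cannot.

```python
def diagnose(tumor_size_mm, irregular_border):
    """A toy decision tree: every prediction comes with its reasoning."""
    if tumor_size_mm > 20:
        if irregular_border:
            return "high risk", "size > 20mm AND border is irregular"
        return "medium risk", "size > 20mm but border is regular"
    return "low risk", "size is 20mm or smaller"

verdict, reason = diagnose(25, True)
print(verdict, "because", reason)
```

That "because" clause is what patients and regulators are asking for, and it's exactly what a deep network can't currently provide.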
Is that one the Moral Machine? The Moral Machine, yeah, it's fascinating, but I'm just gonna quickly cover: what will be the impact on society? There are gonna be job impacts. People have talked about universal basic income. What's gonna happen with all this stuff? I recommend, if you have Netflix, there's a great documentary called Coded Bias that shows some of the problems with this stuff. I talked about Amazon discriminating against women engineers in hiring. Joy Buolamwini, a researcher at MIT, was using a facial recognition application and it didn't recognize her face because she's a Black woman. And why? It's because these facial recognition applications were trained mostly with the faces of white males. There was also the incident where Google Photos labeled photos of Black people as gorillas. Really? Yes, yep, that was another thing. People were laughing, but horrified at the same time, when that came out. But I'll say, if you have Netflix, or even if you don't, get together, host a watch party and show Coded Bias, because you gotta watch out for this stuff. You also gotta watch out for adversarial attacks. You can change a few pixels on a stop sign and guess what, that self-driving vehicle's flying right through. And just some top AI application failures: Microsoft put a chatbot called Tay out there, this was 2016, to learn from Twitter. And within a matter of, I think, 12 hours, this thing was uttering the most racist, sexist slurs there were, because it was learning from Twitter, which is probably not a good place to learn. There are a lot of failures. You're only as good as the data. What's nice is there's an ethics canvas you can use when you're building any application: six areas, six principles that you can consider. Is it explainable is one of them. Is there bias? Is this including everybody?
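"You're only as good as the data" can be shown with a deliberately skewed toy example (the numbers and labels here are invented for illustration, not from any real system): a classifier that compares a sample to the average of its training examples will fail on people who weren't represented in that training set.

```python
def nearest_centroid(sample, centroids):
    """Classify a sample by whichever labeled average (centroid) is closest."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# "Training data": face examples collected only from light-skinned subjects,
# so the learned "face" centroid sits at high pixel-brightness values.
centroids = {
    "face": [200, 200, 200],        # average pixels of the training faces
    "background": [60, 60, 60],     # average pixels of non-face patches
}

# A face from a darker-skinned subject, absent from the training set,
# lands closer to "background" and gets misclassified.
dark_face = [90, 90, 90]
print(nearest_centroid(dark_face, centroids))
```

Nothing in the algorithm is "prejudiced"; the bias lives entirely in who was and wasn't in the training data, which is exactly the failure mode behind the facial recognition examples above.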
So this is something we stress when we teach the course. The Moral Machine is a great exercise, and I'm gonna push that out. Let me just make sure I have the link; we'll put this in as the last one. I just put the link in the chat. Sure. And we'll get a copy of the chat that can be sent out to the people who registered, along with the slides. I'm just going to put that in one more time. Sure. This is an interesting thing to play with if you haven't seen it. You're a self-driving vehicle, you're put in these scenarios, and you're gonna crash. Do I hit the five cats that are walking across the street, maybe illegally? Obviously cats don't know to stop. Yes or no? Or do I swerve to avoid the cats and take out what looks like business people walking across the street? There are a number of scenarios, and at the end you get a rating for how you did compared to other people, in terms of whether you favored younger people over elderly people, things like that. I'll finish real quick. So there's a big push towards explainability. Again, if you think of yourself as a patient, you wouldn't want somebody coming in and giving you a diagnosis and not being able to explain it. So that's currently a very active area of research for AI engineers and researchers. The failure rate is high on these projects, so you gotta be very patient; you see figures like 85%. You just gotta be realistic about this. There are all these predictions about the future, some of the things I talked about: hey, when do you think AI could take over and be a surgeon, for example? And you can see on the chart, measured from 2016, it looks like maybe 40 years in the future. And you see this quote: researchers think there's a 50% chance of AI outperforming humans in all tasks within 45 years. That might be a little bit optimistic.
I always like to end with a little comic that says: are you concerned about the increase in artificial intelligence? No, but I'm concerned about the decrease in real intelligence. I think that's funny, so I like to share it with the class. And again, thank you. We do have a program at Ocean County College, with Intel as a sponsor and partner, where we go into this in a lot more detail. But thank you for your attention and your participation, and for any questions, we're gonna let people put them in the chat. This is fascinating. Every single part of it was really fascinating. And I wanna thank you, Ken, for your time and for the tremendous effort that you put into the slides that you shared with us. I know these are things you use in your class, but this is just extraordinary. And some others who were not able to join today will get a copy as well. This is breaking new ground for our TechSoup Connect topics, and I think it's gonna stimulate some ideas, particularly maybe having a group session where we watch, what is it? Coded Bias. Coded Bias, yeah. That's a good idea. And just being aware of some of these articles, the games, the Moral Machine, can help us examine how we're approaching things in our work and learn about AI-based tools that might help us out. Ken, thank you. You're welcome.