Hello, everyone. Welcome. Okay. Let me tell you a little about Sergio. Some of you know him; he was here last year, and he was one of our most popular speakers. Since we met him last year, he's been promoted. Yay. In addition to his associate professor duties at Champlain College, he is now the Chief Learning Officer. He supervises all online academic departments and supports strategic goals, including new remote learning opportunities, program growth, and enrollment management. He holds a PhD in information systems and technologies in the field of cybersecurity. His experience includes academia, industry, and public service, and his research interests include information security, intelligence, and information warfare. He was a researcher in several projects, including a multinational cyber defense education and training project promoted by the Ministry of Defense of Portugal, and he was an academic member of the NATO multinational cyber defense education and training project. He is a member of the editorial committee of the International Journal of Electronic Security and Digital Forensics, participates in the scientific committee of several international conferences, and is a regular reviewer for multiple scientific journals. His publications have close to 750 citations. He lives in South Burlington with his wife and daughter. When he's not working, he enjoys wood carving, following the investment markets, reading, and assembling jigsaw puzzles. Please welcome again Sergio. Thank you. It's a pleasure to be here with you again one year later. Those of you that know me know that I don't read when I'm doing presentations. But this time around, I'm going to read just a one-pager, and you'll soon understand why. Good afternoon, everyone. I'm delighted to be here today to discuss the fascinating and rapidly evolving field of artificial intelligence. AI has captured the public's imagination like few other technologies before it.
Some view it with hope and optimism, seeing AI as a powerful tool to solve humanity's greatest challenges. Others approach it with trepidation, fearing the existential risks that advanced artificial general intelligence could potentially pose. The truth lies somewhere in the middle. AI is neither savior nor demon. It is a profound technological shift that will undoubtedly transform nearly every aspect of human society and endeavor. Just as past revolutions like the printing press, steam engine, and computer changed the world in unimaginable ways, so too will AI upend long-held assumptions about work, knowledge, and what it means to be human. But AI is not some incomprehensible black box. At its core, it is an ingenious method of processing data, recognizing patterns, and making predictions or decisions based on those patterns. The AI systems we interact with daily, from digital assistants to content recommendations, all use different techniques like machine learning, neural networks, and natural language processing under the hood. As we'll explore further today, AI's current capabilities are already immense, yet still narrow in scope compared to human general intelligence. But the field is progressing at a blistering pace, which is why even this introductory text that I've just read to you was not written by a human at all. It was in fact composed entirely by an AI system, specifically the language model called Claude, created by Anthropic. I simply asked it to write an engaging intro about AI from the perspective of myself giving a presentation. It took me five seconds to get this. Always an A+. I did this to show how disruptive AI is going to be in the workplace. We still need to know what we are doing at this stage. I can tell you that the PowerPoint presentation that I brought was made by me, not by an AI.
But to make sure that I wasn't missing something important, for every slide I asked the AI: what would you talk about on this topic? It would come up with something, and I would say, this I don't like, this I do. And then I would write, and start thinking about what I was going to do. This is not a Champlain College Online advertisement session, so I'm not going to go there. But we are embedding a lot of this in our processes. And if we don't, there's no way we can remain competitive. Human resources are the most important cost in an organization like a college. We need to make good use of people. And for now, this is allowing us to remove from people the type of work where they are not using their brains much, and focus on the skills that they have and the things that we really need them for. But as we will see today, things are changing dramatically, and the world is changing. So let me start with my agenda for today: a little bit about what artificial intelligence is, a very brief history of AI, some implications of AI in daily life, some ethical considerations in AI, and then AI and automation together, and what they mean for jobs and society. And I promised, because I know that we have a fixed time, I promised to try to be brief during this presentation, because I know this is a topic that usually brings a lot of questions. And I think that it's more important to answer questions than to just make the usual presentation and talk about the usual things. Before I do that, I apologize for forgetting to say a little bit more about myself. I appreciate the introduction, but there's an important aspect that wasn't in the bio I shared, which is that I'm Portuguese, and I came to the US in 2018 to work for Champlain College. So I do ask that you all keep in mind that English is not my first language, and that the American pronunciation is not my first English pronunciation.
And so if there's something where I'm not clear or anything like that, please do interrupt me if there's a clarity problem. For questions, please leave them to the end, just to respect the format of these lectures. But if there's something I'm saying that you just don't get, please let me know. I'll appreciate it. So what is AI? It's the simulation of human intelligence in machines. That's the idea. And the idea is to use that for tasks like pattern recognition, data processing, problem solving, and the ultimate goal, autonomous development of concepts and relations, which means autonomous learning. The machines learn by themselves. So here's what I'll do. I'm going to do something that I often do in my classes, which is skip ahead to a different slide that I'll come back from, because I want to talk about this since I mentioned pattern recognition. I want to highlight the dangers of AI, if not used well, through pattern recognition. These two images were generated by AI, by the way. I asked for an image, and it just created them, which is cool and fast. On the left side, you see a landscape with some military tanks. And on the right side, you see a landscape with no military tanks. And this is a reminder for myself to tell you a story that happened around the turn of the century. I don't know how many of you know, but now we have the F-35s; back in the day, we were talking about F-16s. And the F-16 is sold to nations in very different formats. The plane is the same, but the configuration is sold to different allies in different ways. And then each ally gets to develop software for the F-16, and the Portuguese try to make their F-16s better than the Spanish, and the Spanish better than the Portuguese. And we have our small technology wars in there. This case actually wasn't Portugal's. We had a few other interesting ones, but this one wasn't ours.
The goal was to have the F-16 fly over a field and identify tanks that were camouflaged. And how to do that? Back in that day, we were talking about supervised learning. Supervised learning is: you give a large data set to the computer, and you tell it, here are images like the one on the left, these have tanks in them; and here are a ton of images like the one on the right, these do not have tanks in them. We feed it thousands of images like that. The airplanes went over the fields and took pictures, and we gathered all of that, sent it to the system, and tested the system. The way we usually do this in AI is we take the data set that we have, randomly select half of it, and leave the other half on the side. We train the AI on this half, and once it has learned, we test it on the other half that we left on the side to see if it's working. So we tested, and the AI learned how to recognize the situations where there are tanks. Everybody was excited. So it's installed in the F-16s, the generals come, there's this demonstration, and nothing works. So let's see what happened. When I asked the AI for these images, I was very careful, because there are two things that we can notice in them. One is that one has tanks and the other does not. The other is that the one with tanks is on a sunny day, and the one without tanks is on a cloudy day. What happened was that the airplanes flew over the fields with tanks on sunny days, and then the tanks were removed and the days got cloudy. So in all the images with no tanks, it was a cloudy day. So what did the computer learn? If it's sunny, there are tanks. If it's not sunny, no tanks. And that works like a charm in testing, but not in real life. And this is where the quality of the data sets makes a huge difference in AI.
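The tank story can be sketched in a few lines of code. This is a toy illustration, not the original system: the data, features, and "classifier" here are invented to show how a random half/half train/test split can still report perfect accuracy when a confound (sunny vs. cloudy) perfectly tracks the label in the collected data.

```python
import random

# Toy data echoing the tank story: every tank photo happened to be taken on
# a sunny day, every no-tank photo on a cloudy day. "sunny" is the confound.
images = [{"sunny": True, "has_tank": True} for _ in range(500)] + \
         [{"sunny": False, "has_tank": False} for _ in range(500)]

random.seed(0)
random.shuffle(images)
train, test = images[:500], images[500:]  # random half for training, half held out

# A "classifier" that, like the F-16 system, latched onto the weather
# instead of the tanks. (A real model trained on `train` would do the same,
# since the confound predicts the label perfectly in this data.)
def predict(img):
    return img["sunny"]  # sunny -> "tank", cloudy -> "no tank"

accuracy = sum(predict(i) == i["has_tank"] for i in test) / len(test)
print(accuracy)  # 1.0 on the held-out half, exactly as in the story

# But a genuinely new situation, tanks on a cloudy day, is misclassified:
print(predict({"sunny": False, "has_tank": True}))  # False (wrong)
```

The held-out test set looks like rigorous validation, but because it was drawn from the same flawed collection, it shares the confound and cannot reveal the problem.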
But things like this, pattern recognition, are also what allow systems for face recognition in airports, where for more than 10 years now, and you heard in my introduction that I worked a lot in cybersecurity, AI systems can differentiate people far better than humans can. So pattern recognition is behind a ton of things. And as I go into the brief history of AI, I placed there images of two papers that I published, the one on the bottom back in 2006, the one on top in 2007. Well, first to remind me that I'm getting old and have been doing this for a while, but also to talk about different situations where pattern recognition is really important. The one on top is about cybersecurity again: how do we know that the person coming to the computer is the actual owner of the computer, the user that should be using it? The one on the bottom is: how do we look, in a hospital, into the lab results of a patient and identify if they have prostate cancer? And if so, what's the likelihood of survival? That one was back in 2006. And the goal wasn't so much about getting those results right as it was, for this paper, to see if this particular technique of artificial intelligence could be used for that. But still, we got a precision of 91%, and that was 18 years ago. Today, it makes zero sense to do healthcare without artificial intelligence. I'm going to have to adapt the next story because I'm on camera, so I can't use the language that was used in one of the meetings in the hospital. We did an experiment with AI in an intensive care unit to predict survival rates for the patients there. And it was all about the unit's tests, all the tests that they had for each patient. And the way this worked back then, and now we don't even get to see it, which is what makes it more complicated, was that the system generates a ton of rules.
And then humans like me would go into the rules and say, okay, if I take this rule and make it simpler, how much do I lose, so that the system still works well? To do that exercise, back then we were talking about what we called expert systems, and we would call the experts into the room to help us go through the data and see what happens. What's happening here? What are you seeing? We have this and it's working, but why? And an older doctor at the back of the room stood up, and with more colorful language than the one I'm going to use, said: that line down there, that's the urine, right? Yes? I was always told in college that good urine, good doctor. So he was the one able to identify, in that data, that the rule for urine was probably the most important one, and it was. And it turns out the explanation was that the kidneys are the first to fail, so that's a solid indicator. All this to say that back then, in the late 90s and early 2000s, we were in the box in the middle, the resurgence of AI, when we started having more computational power. But the computational power we were working with then doesn't compare with the power that you have on your phones right now. So we were talking about expert systems, things that were designed to do one task: go into the ICU and identify which patients need more attention; go into an F-16 and identify tanks. Very limited situations where we would try to have the computer be as smart as the experts doing that one thing. The reason why I call this lecture the birth of a new era is because we are now trying to have systems that are as smart as all the humans combined, which is a different level. The brief history is here because I really want to make sure that everybody understands that this is not a new problem. This starts in the 50s with the same people that started computation. Alan Turing was involved in the birth of AI, just as he was involved in what we now call the computer.
From 1965 to 1974, a ton of development, what we would now consider simple things, but a ton of development. And then what we call the winter of AI. It couldn't do much. It didn't do enough to justify investing. There was no business case behind it. The state of development wouldn't allow any real application, so there was no investment, and where there's no money, there's no evolution. Then the expert systems start in the 80s, and I know that the formal history of AI says it starts there and then kind of stops around the 90s, but that's not true. As you just saw, we worked on expert systems pretty much until six years ago; they just became a little bit bigger, but still the same thing. I placed there the 2002 Roomba, the vacuum cleaner that vacuumed on its own, because I see that as the first real-life mass application of AI. It's a vacuum cleaner that finds out where the corners of your house are and then tries to figure out the best way to vacuum the place. Was it great? Oh, I still hate mine. But it's the first time that we saw AI coming into our houses: we are buying a device and we don't even think of it as AI. It's just a vacuum cleaner that helps our life. I think it's also good to show that it really does help our life, even though I still have to vacuum once a week. But it's getting there. 2010: significant progress in visual recognition. This allows for a ton of wonderful things. We get an MRI, and one thing that is important to understand is that a computer can see things that we can't, because it can look at details that we can't. We can't look at the skin of a face and see every tiny inch of it and spot the first stage of a skin cancer. What we already have are apps. My wife uses one constantly, and the doctor loves it. You take a picture of a mole, the AI goes through it and tells you: well, you need to see the doctor about this one, or this one is fine, or can you take another picture of this one in two weeks?
And we often have cases, because there are humans behind it improving the app, where the answer came back, take another picture in two weeks, but two days later we had an email saying, we looked at this, you need to see the doctor. And so far that app has missed zero times. All the ones it said the doctor should see, the doctor agreed and removed them; all the ones it said were okay, in a periodic check with a dermatologist, all clear. It can look at every single detail, if we have the computational power to do that, in a way that a doctor looking at your body can't. So facial recognition, any type of visual recognition, is going to be super important, but it also created a step toward robots becoming real beyond what we have now. We already have a lot of automation. You go to a car manufacturer and you get the robots that are specialized in doing certain movements, but those are programmed by humans: we tell them, okay, you need to do 45 degrees and then turn to the right, and that's what they do. But if your computer can see and learn the path by itself, then we are one step closer to having a robot that can do whatever we can. And we are getting there. They can walk in a funny way, that's true, but they walk, they run, and they can walk forever, while I can do a mile and I'm done. In the military, the robot mules already carry a lot of weight for the soldiers, which allows them to get where they need to be with a level of energy they wouldn't have if they were carrying 60 or 70 pounds on their backs. But it's very different to have a mule that just follows you versus a mule that can identify everything around it and warn you if it sees a sniper, which is so hard that usually only snipers can spot other snipers. So we have that. Keeping an eye on my time: right now in daily life, we have things like personalized recommendations.
When you go shopping online, they keep telling you, you'll probably like this, you'll probably like that. So you get those recommendations. Natural language processing: a system that can understand what we are saying and reply to us in English, from voice assistants to chatbots. Real-time translation: Samsung now has a phone, there's a commercial on TV, I invite everybody to see it if you haven't. You get to a foreign country, everybody is frustrated because nobody understands anybody, but you talk to the phone in English, the person on the other side listens in their own language, and it's real time. And this happens now because artificial intelligence can do real natural language processing. You might ask yourself, but weren't we using chatbots five years ago? We were. The difference is, if you went to Amazon five years ago to chat with the bot, it would just look for words. It couldn't care less about what you were saying; it would identify a word that it hoped would tell the system something, and then give you an answer about that. Seems like you're trying to know about the discount. No, I was saying that I got a discount on this, but now I have all of these problems; but it caught that word and acted on that. Now it has the ability to make meaning out of what we're saying. Meeting notes: we could have AI right now recording everything I'm saying and at the end summarizing it for us. The challenge in situations like healthcare and education is: where is that data stored? What happens with it? If we think about healthcare, healthcare data is protected. So if you are having a meeting, a bunch of doctors talking about a patient, and the data is stored in the cloud, how are we making sure that the privacy regulations that protect the patient are being met? In a college, student data is protected. So if you are talking about a student, where is that data being stored?
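The difference between the old chatbots mentioned a moment ago and today's systems can be made concrete. Here is a minimal sketch of the old keyword-spotting approach; the trigger words and responses are invented for illustration. The bot never models meaning, it just fires on the first trigger word it finds, which is exactly why it misreads the discount complaint.

```python
# Invented trigger words and canned replies, for illustration only.
RESPONSES = {
    "discount": "It seems you're asking about discounts. Here is our discount policy...",
    "refund": "It seems you want a refund. Here is our refund policy...",
}

def old_style_bot(message):
    # Keyword spotting: ignore the sentence, scan for the first trigger word.
    text = message.lower()
    for word, reply in RESPONSES.items():
        if word in text:
            return reply
    return "Sorry, I didn't understand that."

# The customer is reporting a *problem*, but the bot only sees "discount".
print(old_style_bot("I got a discount but now I have all of these problems"))
```

A modern language model, by contrast, would respond to the actual complaint rather than to the word "discount"; that shift from word-matching to meaning is the jump described above.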
So the abilities that AI has are being confronted with all the other protections that we must have. And that's where the role of governments should come in. And it's very hard to get it right. You go into the European Union and they have the General Data Protection Regulation, which says, for instance, that if a decision about you is made by AI, say you ask for a mortgage for a house and it's declined and the decision was generated by AI, you have the right to ask for a human to review it. Makes sense. Now my question is: who is going to be the employee that has the guts to override the system and say, no, no, the system says I shouldn't be giving a mortgage to this person, but I'm going to say yes? And what if things then go wrong? It's not easy to find the right balance. Oftentimes we forget about the human nature of humans, and we also want to keep this in a way that makes sense for businesses, because all of this research comes from the intention to make money, like almost everything that we do in life. Content creation: my speech. We are now looking at what it means, for instance, to create an online course. So let's say that we want to talk about moral philosophy. If we look at 100 different courses on moral philosophy, 90% of the content is going to be the same in all of them; it just changes how it's said. Everybody's going to talk about the same philosophers and the things that they said. So why are we asking our experts, our subject matter experts, our faculty, to spend a ton of time writing that 90% that everybody in the field knows? We get AI to generate the 90%, and the expert reads all of it, checks that it's correct, and then brings the 10% that comes from their expertise. And so we are making use of the brain instead of the copying, pasting, and typing, and spending all of that time typing. But it generates a challenge in differentiating products in the market.
If everybody is creating vacuum cleaners based on the same AI model, all vacuum cleaners are going to be the same. So how do we differentiate the ones that were made in a country somewhere at $2 an hour of labor versus $10 an hour in another place? How do we ensure that this still works? And as you can see, I have more questions than answers. Automation: smart appliances, from the Roomba to the Alexa that helps us. Self-driving: we have Mercedes and Tesla really advanced in self-driving. It's now more of a regulation question than a question of having the ability to do it. And again, that's another thing that is only possible because of computer vision. And one thing that I want to highlight: we all fear, at least I do, that I'm going to get in the car and the car is going to get into an accident. And we think, is the car safe? I just want you to reframe the question. It's not, is the car safe? It's, is the car safer than I am, or than whoever was going to drive it? So let's say that I went to a bar, I was happy, I drank too much. Is the self-driving car safer than the state that I'm in? 100% yes, right? Now, if I'm in my normal state, do I want to drive myself? I still do, but that's because I'm human. Because if I think about it, the likelihood that the machine is going to see a child running in from the side, or something like that, before I do, is much higher. So it's not about, is the machine safe, meaning it's never going to get into an accident. It's, is it safer than humans? And we are, in fact, at the point where it is, and that's why we'll probably see Mercedes get a license for full self-driving, no hands on the wheel, very soon. Ethical considerations. I'm looking at the time and still doing good. Ethical considerations in AI: privacy concerns. I touched a little bit on this, right?
We're talking about systems that, to do what they do, need to go through a number of records that is mind-blowing, to be able to really see how prostate cancer develops or not, what treatments worked better, which ones didn't. Which ages, social profiles, and so on are more likely to be treated well in a hospital. What course should I give a student after they take introduction to statistics, based on where they did well and where they did not do so well inside that course? What's the next step for them? To get all of this, it takes a ton of data. A ton of data to do the education, a ton of data to do the health, a ton of data to do the driving. Tesla is where it is in terms of self-driving because all the cars that are out there are capturing images, and the way they trained the system up until six or seven months ago was: okay, there's someone behind the wheel, something happened, and the driver hit the brakes. So now the system is learning that in that situation, you need to hit the brakes. But you need hundreds of thousands of people going through the same situation, hitting the brakes or not hitting the brakes and getting into an accident, for the system to learn. So that's another huge amount of data. But now, if you start thinking about a universal intelligence, it needs all the data for health, all the data for marketing, all the data for driving, it needs all the data in the world. And all of that data, where is it? Is it in our own country? Is it somewhere else? If it's in our country, who's accessing it? How are we protecting it? Because all of a sudden, we might start having insurance companies thinking, I don't want to insure that person, because based on AI and the data that we have on their DNA or their blood tests, this person has a higher likelihood of dying. So I'm only going to take the very healthy people.
I remember that there was a doctor who had a really awesome survival record in his surgeries. But he was a well-known doctor and he could afford to select his patients. So the fact is that he was only selecting patients that he knew he could save. And so, yeah, he was doing great. It's like me: I never had a patient die on my watch, because I never operated on anybody. So, privacy concerns: where are we keeping all the data, and all the results that come out of the data, and everything that is related to access to that data? And we start thinking about countries that are not democratic, and how they think about what we do and what we say, and how that is being processed. And the funny thing is that I'm saying this to this group and the group that is on Zoom, but then this video goes online. And an AI can go through the video, capture everything that I say, and attach it to me. Now they know what I think in April of 2024, and they can go and compare it with other lectures that I gave five years ago, ten years ago, and see how I'm progressing. And one day somebody might decide that this guy might be dangerous. We didn't have that problem 30 years ago. Luckily, nobody knows all the crazy things that I did when I was in high school. Not even my mom, thank God. We don't have that luxury right now. My mom can't go right now to ChatGPT and say, what was Zezu doing in his high school times, and have ChatGPT come back in five seconds and tell her what I was doing. But in 20 years, I could go and ask about my daughter, who is called Anna: what was Anna up to when she was in high school? And it can go search the web, Facebook, Instagram, everything that she did, plus school records, whatever it has access to, process all of that in five or ten seconds, and come back and say whether she was a really nice girl. Bias and fairness: systems perpetuate and amplify existing biases. That's the doctor case, right?
If he selects only the good patients, and we say, okay, the patients with low blood pressure should go to this doctor because he does really good heart surgery on patients with low blood pressure, the system learns and keeps sending this doctor the patients with low blood pressure. And so this perpetuates. Back from way before AI, I had a computer science teacher tell me: computers are dumb. You give them good things, they generate good things. You give them trash, they generate a ton of trash. If we insert data as it is, the system is going to amplify it the way we have it. So let's think about our society. And I'm not going to go into politics, and I don't even know enough about the US to go into it. But think in your heads about the societal group that you imagine will be in a good financial situation and will always pay their loans. And then imagine a group that lives in a challenged area, a bad zip code, where only 75% of people pay their loans. If we think about these extremes, what is an artificial intelligence system going to conclude? That we should give loans to the people that have the ability to always pay, and avoid all the others. By doing this, there's no access for the second group. And when we do have a human give some loans there, the numbers are going to look even worse. And so the system keeps splitting the groups apart. So we need to think about what it is learning, and we go back to the tanks in the field. Is it learning what we want it to learn? Or is it learning something that allows it to get to a rule, even if the rule is just: it's sunny, there's a tank? It got it right, and when we checked, it was right. But in real life, it learned something that makes no sense. Actually, in education we also often see that type of alternative conception when we teach students and they think, oh, I got it. We thought they learned something; they applied it to the exercises that we gave them, and it worked just nicely. But it wasn't what we were trying to teach them.
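The loan feedback loop described above can be sketched in a few lines. This is a deliberately simplified toy: the group names, repayment rates, and threshold are invented for illustration. The point is that a rule trained on group-level history denies every individual in the lower-rate group, so no new repayment data is ever collected about them, and the gap is frozen in place.

```python
# Invented historical repayment rates for two groups (illustration only).
repayment_rate = {"group_a": 0.95, "group_b": 0.75}

def approve(group, threshold=0.80):
    # The "model": approve a loan only if the applicant's group has a
    # historical repayment rate above the threshold. No individual is
    # ever assessed on their own merits.
    return repayment_rate[group] >= threshold

for group in repayment_rate:
    print(group, approve(group))
# group_a is always approved, group_b always denied. Because group_b gets
# no loans, no new data about group_b ever arrives, so the historical rate
# never updates and the split perpetuates itself.
```

This is the same failure mode as the tanks: the system found a rule that fits the data it was given, not the thing we actually wanted it to learn.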
And by the time they get a different situation, that's when they realize: I don't know this. But with humans, that's small groups, right? With AI, it's millions, all served at the same time. Transparency and accountability: how do we have experts that can go into the system to see the rules? If we have reasons to doubt, how do we audit these systems? Security risks: there's a ton of them. If you think about how alarm systems in houses are typically connected to computers, and computers are connected to the internet, and everything connects, you can see how we can have that type of problem. You can also imagine a robot doing a surgery, and something fails. So we have security and safety concerns associated with the use of AI. Again, we need to reframe the question to: is it better than what the human would do? And it is. And there are problems with uses like cyber warfare and misinformation, all of the things that come with the ability to generate a text in five seconds, or the images that we saw there in five or ten seconds. You can generate a PowerPoint on a basic topic right now in 30 seconds. You just give it the topic and it generates the PowerPoint. You say, I want 15 slides on this topic, and it does that. It's not great yet, but if you compare what you have now with what you had three months ago, the jump is exponential. And so it's amazing what is being created. Surveillance and control: we know that the People's Republic of China is using AI to do surveillance through all the cameras that they have across the nation. And they are cataloging all of their citizens as very good citizens, half-good citizens, the ones that they need to pay attention to. So, surveillance systems and control systems. We can think about nations, but we can also think about a mill. You go into a factory, and there's an employee that goes too often to the bathroom. Maybe I don't want to keep him here.
There's another example: I saw it, I think this morning or yesterday on Bloomberg, there's a restaurant, one of those big chains, that changed the layout of its stores so that employees will take fewer steps during the day. Now my question is, what do they do with the ones that actually take a lot of steps, maybe because they are not so good at orienting themselves in that layout? So there are concerns around this. Impact on social interaction: if we start talking with AI friends instead of people, what does that make of us? If in five years you don't want me to come and talk with you, you want the robot to come, because it knows it all and actually has all the answers and can do better; and maybe we have holograms, and we all sit in the room and see the robots right in our living room so we don't go out of the house: what does that mean for being human? What does that make of us? And the environmental impact. The amount of data processing that this requires means that we are building data centers at a speed that is absolutely amazing, and they need an amazing amount of electric power. It's funny that we are trying to get away from fossil fuels, and yet we still generate a lot of electric power through fossil fuels, so we buy an electric car and then power it with electricity that was made with coal, because we still don't have the renewables to do that with. We are trying to fix that, and we are trying to have more renewables. But at the same time, we keep spending more and more and more electric power, at a pace that we can't grow the renewables at. I don't know the exact amount of energy that goes into one of these data centers, but it's absolutely astonishing. I just saw one that is being created that costs $200 billion to build. And if we wanted to have one of those big data centers here in the Burlington area, we wouldn't have enough power for all of us plus that data center.
So what does that do to our environment? How are we going to generate all that power? I saw that Bhutan is creating a data center for Bitcoin mining, and we talked so much about Bitcoin mining being bad for the environment. Compared with Bitcoin mining, this is the sun next to our moon. So how are we going to keep feeding this as it grows more and more? Jobs and society: I know that I'm running out of time, so I'm going to just say this, and then I'll keep this slide up for a bit and change to the other one while you ask questions, so if you want you can go back to the recording and see what's there. There's absolutely nothing that, with enough computational power and a little bit of time to improve our algorithms, computers can't do better than us. The only limitation is that we move around, right? We see. Combine that with robotics, and any work that we do, computers can do better. So we need to start thinking about what society looks like. The first phase is easy. Champlain College just launched a ton of master's degrees: marketing analytics, healthcare analytics, finance analytics. Let's educate people so that they can get jobs right away, right? The first phase is going to be like all the others: we lose jobs here, we create other jobs there to support the growth. That's fine. But there will come a time when I don't need a computer scientist. Right now we use AI to help a computer scientist do it faster. In a not-too-distant future, we'll be in a situation where we don't need people to work. We need to start thinking about what society looks like when nobody needs to work. And that's it. I'll take questions. Yeah, you can use the microphone. We have a question right there. Can you hear me? Yes. Regarding the AI applications that you're using in, say, information security:
One of the biggest problems in our family, and for a lot of people I know, is these hackers that will contact you and try to extort you, or somehow trick you into giving information or into some activity that would put your finances or your family in danger. They all have connections, they all use the same language, they all use the same methods. Is there a possibility of some kind of device for picking up those trends, something you could simply hook to your telephone, or that could be built in, and it would alert you? It would go bink, and all of a sudden the light would come on: this is a scam. Is that possible? Yes and no. The same exact logic is the logic that we use in servers to detect spam email and scam emails. We all get spam email, but we get less than 1% of what is actually coming toward our inbox. Servers are filtering, detecting, and saying no, no, no, no. The problem with AI in the security space is the same problem as everything else in the security space: it's a game of cat and mouse. We can build an AI tool that is going to identify those trends and alert you. But on the other side, we're talking about criminal organizations behind all of this, and they can use AI tools to see what our AI tool is using to detect them, so we feed on each other. Cybersecurity is exactly the same thing. The traffic that goes through our networks is now so much that humans can't keep an eye on what's going on and see how we're being attacked. So we use AI to do pattern recognition: this is not normal. But then they use AI to see what our AI is looking for, and they improve, and we keep feeding each other. And now we are getting to a point where the production of speech is so good in AI that we don't need a human on the other side talking with you.
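That filtering idea, scoring a message against patterns that scammers tend to reuse, can be sketched in a few lines. This is only a toy illustration under my own assumptions: real filters learn their patterns statistically from millions of messages rather than from a hand-written list, and every phrase, weight, and threshold below is hypothetical.

```python
# Toy sketch of pattern-based scam detection (not any real vendor's filter).
# Hypothetical phrases scammers reuse, each with a made-up suspicion weight.
SCAM_PATTERNS = {
    "verify your account": 3,
    "wire transfer": 3,
    "you have won": 3,
    "act now": 2,
    "gift card": 2,
}

def scam_score(message: str) -> int:
    """Sum the weights of every known scam phrase found in the message."""
    text = message.lower()
    return sum(w for phrase, w in SCAM_PATTERNS.items() if phrase in text)

def is_suspicious(message: str, threshold: int = 3) -> bool:
    """Flag the message once its total score reaches the threshold."""
    return scam_score(message) >= threshold

print(is_suspicious("Act now to verify your account!"))  # True
print(is_suspicious("Lunch at noon?"))                   # False
```

The cat-and-mouse dynamic described above is exactly why a static list like this fails in practice: as soon as attackers learn which phrases are flagged, they reword, which is why both sides now use learned models instead.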
Soon it's going to be a computer trying to scam you, and the computer can use the best language for your situation while at the same time trying to avoid the AI detection. So it's a cat-and-mouse game in everything that has to do with security, including AI. We improve on one hand, they come back on the other, and we keep building on top of it. Very interesting conversation. The biggest concern I have is the pace of the development of AI. Going forward, as you mentioned previously, the things we can do with AI are increasing exponentially, without our having any controls over them. How do we get controls that will be able to keep up with that change? It just seems impossible. It is. You have so many dedicated people, in government, in business, private individuals, who can just develop things that you can't control, and nobody knows it's happening because they're all doing their own thing by themselves. What can we do about that? Yeah, the first thing you can do, and I think probably the only thing you can do, is vote. The only control that we have over our nation is government; we vote. But let's be honest, our politicians are not technicians. Most of them are not from a younger generation that was really trained in computer science. We are still discussing whether we should have blockchain or not, mixing up what is a cryptocurrency like Bitcoin with what is a technology layer created to protect data privacy, like some of the applications of Ethereum, throwing everything into the same bag and still trying to understand what to do with it. And we wasted more than 10 years on the blockchain fight without reaching anywhere. If we do the same with AI, in 10 years it will be too late to discuss it. The other problem on top of this is that we pay a fortune to experts in the field, in the private sector, to develop these things.
But if we even think about the federal government paying a fortune to an employee, they are paying that fortune with our taxes, right? Paying $500,000 a year to people in the federal government when so many people don't have anything to eat, that's outrageous. And it is. But at the same time, if we pay $100,000 to computer scientists to work for the federal government and an AI company is paying them several times that, where do you think they go, right? And then we fill the positions and we say, oh, we have computer scientists, yes. But were they the best? Let me make an exception, because some of them are there because they want the mission. There are people there who want the mission, just like police officers don't do it for the money, like professors don't do it for the money. My professors of cybersecurity and software development, I can tell you it's not for the money. That's why we have so many who are retirees: they can afford to live on a faculty salary because of their private-sector careers. So there's an exception of people who are there for the mission. But the large majority are there because they don't want the demands of the type of job that pays four or five times more, because it takes training, constant training, more training, accountability, and results. So again, this also comes with a challenge: how do we really think about what the role of the federal government is? How do we make it efficient? It can't keep spending more than it makes, and it can't keep raising our taxes. So it needs to become efficient, but it also needs to be able to offer competitive salaries in the areas that are strategic. And I'm not sure that following the model we have for defense, where we have private contractors, is going to serve us well in the field of AI. You had said a little bit earlier that at some point in the not-too-distant future, AI and machines will be able to do all kinds of work.
But you said earlier that the way AI operates is it learns from what's been done in the past. Part of every kind of work is innovation and invention, which to my mind involves the human imagination. So if AI is doing all the work, will there ever be innovation? Yes, and that is a very good question. I don't have much time, but I'm going to try to fit this in so that it's in the recording, and I'm willing to stay after we close the recording to answer anything else. That has been the argument so many have made in the past 10 years for why we will never be replaced by computers. Tesla used all of our driving data to train its systems to know what to do, up to a year ago, when they fired the entire team that was doing the classification, because the system had learned so much that it could self-validate and do the testing. The way artificial intelligence works, from day one of the phase we are in now, has been to try to mimic the actual brain. I was working with neural networks when we had the ability to have eight neurons. That's all we had; we were doing this with eight neurons, because computationally we couldn't do more, right? We need to test all the combinations between all the neurons and have them combine. Now, with all the computational ability that we have, and that's why NVIDIA is doing so well, because their hardware does this so well, we can have systems with more neurons than our brain. What is imagination? Imagination is considering something different, most likely a variation of something that we saw before, unless we believe, and I know that some do, that we get inspiration from a non-physical entity. Most likely we look at a tree, and we have a problem with our car, and we somehow imagine a way of combining them. Something new that we didn't have pops up. Then we either forget about it, or we go and create it and see what happens. But how do we create now in the real world?
We simulate environments. We have a dock; we want to see what the boat is going to do when it arrives with the water in those conditions. We don't put a real boat there, it would damage the boat. We simulate everything. So the machine now has the ability to generate a variation and see how well it would do. Art? Well, there are all types of art, but there's the art that sells and the art that doesn't sell. Machines can go and see what was common to everything that sold, and then make variations. And then we say, but what about copyright? Yeah, I don't know how we're going to fix it, but that goes back to the same point: we need to start thinking of our world in a different way. I know that we are right about at time. One more question. On a very practical note, as a potential user of AI, just for looking for information or getting some analysis done, what do you recommend in the way of an app on an iPhone? A couple of years ago I downloaded ChatGPT, or whatever it is, and Bard, but they're out of date in terms of what I need; they say, oh, we don't do current stuff. Yeah, I don't use iPhones; I'm a European, so I'm still in the Android phase. iPhones are not so big there. But I recommend that if you want to give AI a go, you try it on something a little bit bigger, just for the comfort. And in any browser, not as apps: open the browser and go for ChatGPT. I'm going to tell you this, and I'm not advertising, I don't have a commission: pay for the GPT-4 version, because the difference between the free 3.5 and the paid GPT-4 is like night and day. I don't know how much it is, I think it's 15 or 20 bucks a month. Even if you pay for just one month, give it a try. Try ChatGPT with GPT-4, and try the one that made this intro: Claude. As of the beginning of March, Claude's paid version is better than GPT-4. That was not the case two months ago. So wait for GPT-4.5.
But Claude, ChatGPT: if you use the paid GPT-4, there's a thing at the top that invites you to explore the world of ChatGPT. You click there and there are dozens of apps: to create documents, PowerPoints, videos, to create images, to tutor you in science, in math, to review texts. The number of applications that become available to you: I can tell you that four months ago there were four, and now it's dozens. So I recommend you give it a go. It's a good use of 20 bucks. Try ChatGPT with GPT-4, try Claude, ask questions, and see the quality of what it produces. Wonderful. Thank you so, so much. Thank you.