Hi, I'm Jerry Michalski. This is a five-minute university about artificial intelligence and machine learning. I'll look backward, and then a little bit forward.

Since the earliest computers in the mid-1940s, people have been trying to figure out how to model human thinking; it seems only reasonable. To grossly oversimplify the story, this followed two paths. Some researchers tried to emulate the logic of reasoning: if you have sniffles and a cough, then maybe you have the flu, except when you have the chills, when maybe it's malaria. You can see how this gets pretty complicated. Another bunch of researchers tried to emulate the biology, the neurochemistry, of learning and reasoning. They tried to map how neurons work: they have inputs, they have some kind of threshold function, and they then trigger or don't trigger, firing an input to some other neuron, and so on. In these webs, that turns into a lot of matrix math.

The first group became known as expert systems developers, and the second group became known as neural networks, or artificial neural networks. The expert systems were interesting because you would sit a knowledge engineer down with a domain expert, and together they would craft a series of explicit rules, like "if you have a chill...". This left a nice audit trail when the expert system made a recommendation, because you could then figure out which rules had been triggered. Neural networks, by contrast, learn from large data sets. You feed them inputs and outputs, and then you ask: with this new input, what would the output look like? They're very calculation-intensive and occasionally gave amazing results, but they couldn't really leave an audit trail.
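The two approaches above can be sketched in a few lines of code. This is a toy illustration, not anything from a real system: the rule names, symptoms, weights, and threshold are all my own inventions. The first part is a rule-based "expert system" that keeps an audit trail of which rules fired; the second is a single artificial neuron, a weighted sum of inputs followed by a hard threshold.

```python
def diagnose(symptoms):
    """Tiny rule-based diagnosis; returns (conclusion, audit_trail).
    Rules are illustrative inventions, not medical advice."""
    rules = [
        ("R1: sniffles AND cough -> flu",
         lambda s: "sniffles" in s and "cough" in s, "flu"),
        ("R2: chills -> consider malaria",
         lambda s: "chills" in s, "malaria"),
    ]
    trail, conclusion = [], "unknown"
    for name, condition, result in rules:
        if condition(symptoms):
            trail.append(name)   # the audit trail: every rule that fired
            conclusion = result  # later rules override earlier ones
    return conclusion, trail

def neuron(inputs, weights, threshold):
    """One artificial neuron: weighted sum of inputs, then a threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0
```

With the expert system, `diagnose({"sniffles", "cough"})` returns the conclusion together with the list of triggered rules, which is exactly the audit trail the talk describes; with the neuron, the "knowledge" lives entirely in the numeric weights, and there is nothing to read back.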
It's not impossible to figure out what a neural network did, but it's incredibly difficult compared to expert systems.

Now, in 1969 two bigwigs in the field, Marvin Minsky and Seymour Papert, wrote a book titled Perceptrons. Perceptrons nixed research in neural networks for a good 10 to 15 years because they "proved" that neural networks were impossible. What they actually proved was that mathematically very thin neural networks didn't have the complexity to model certain things. That was true. But what they were saying was that this whole field of study was wrong, and about that they were entirely wrong.

What happens a little bit later is that expert systems get better and start to find mainstream use. But all of a sudden, because a few people were still working in the field, neural network researchers found they could architect hidden layers, multiple layers, into their systems. This became known as deep learning, depth being the interesting word here. They got a boost because instead of just using computer CPUs, they got to use graphics processors, GPUs, which added a lot of speed, and they got breakthrough performance in some narrow domains: if you really applied this intelligence to something specific, it got really good. Out of this we got some variants of neural nets, called convolutional neural nets, recurrent neural nets, and a series of others.

Because I bear a little bit of a grudge against the expert systems folks, I tend to think of artificial intelligence as the expert system stuff, the rules-based stuff; machine intelligence as the broad category; and machine learning as a category for neural networks. Although it could be that machine learning is the broad category that encompasses everybody, but that makes it a little harder to discriminate.

This is me in the late 80s at New Science Associates, where I was a research analyst, about to go give a presentation. Those are acetates that I have my presentation printed on, and this is a report I wrote in 1988
titled Neural Networks: Prospects for Commercial Use. So I have a little bit of history in this field, and I loved discovering neural networks and writing about them back in the day.

So what happens then? Expert systems got better too, but not as much. Machine learning started to take off because we started to get gigantic data sets. GPUs gave way to TPUs, tensor processing units, and other special-purpose hardware that could do the matrix math and other things much better, and we got breakthrough performance in much broader domains. You've seen some of it yourself with GPT-4 and Stable Diffusion and a series of other things. Here the models are called generative adversarial networks; large language models; transformers (GPT is the generative pre-trained transformer); and latent diffusion models, like Stable Diffusion for artwork. This is a thrilling sort of era.

What I think is happening right now, and your mileage may vary on these conclusions: first, this is real. This moment for neural networks and machine learning is extremely real. These things are very powerful. They're very helpful. They're easy to misuse, but when used properly, they're fantastic. Second, machine learning eats tasks, not jobs, meaning everybody's worried about the replacement of a complete job, but it's really hard to do somebody's entire job.
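Looking back at the Perceptrons episode above, the limitation Minsky and Papert proved and the hidden-layer fix behind deep learning can both be shown concretely. The classic function a single threshold neuron cannot compute is XOR (exclusive or); adding one hidden layer of the same kind of neurons solves it. The weights below are hand-picked by me for illustration, with no training involved:

```python
def step(x):
    """Hard threshold: fire (1) if the activation reaches zero."""
    return 1 if x >= 0 else 0

def xor_net(x1, x2):
    """Two-layer threshold network computing XOR, which no single
    threshold neuron can do. Weights are hand-picked, not learned."""
    # Hidden layer: h1 behaves like OR, h2 behaves like AND.
    h1 = step(1.0 * x1 + 1.0 * x2 - 0.5)  # OR(x1, x2)
    h2 = step(1.0 * x1 + 1.0 * x2 - 1.5)  # AND(x1, x2)
    # Output layer: fire when OR is true but AND is not -> XOR.
    return step(1.0 * h1 - 1.0 * h2 - 0.5)
```

The hidden units `h1` and `h2` are the "depth" in deep learning: intermediate features that no single-layer perceptron could represent, which is exactly the gap Minsky and Papert pointed at and later architectures closed.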
It's pretty easy to peel off different tasks. If three of the six tasks that are important for you to do get eaten by some sort of technology, we're going to have to redesign your job, or let you go, or have fewer of you, or something like that. So it's really important whether we approach these technologies with the intention of augmenting humans or replacing them. This is a big philosophical question. Unfortunately, capitalism pushes us really hard to replace: corporations have been trying to get rid of full-time employees for years and years and years, and doing a very good job of it. So this is a really important philosophical question.

But I think our future is cyborg. I think human-in-the-loop really matters. By "our future is cyborg" I mean a blending of human and machine, and I don't mean through prosthetics or some kind of man-machine interface, at least not at this point. I just mean through software that you can consider an extension of your capabilities, a superpower added on. If you have a sort of special touch for spreadsheets or Photoshop or Procreate or some other piece of software, where you really are no longer thinking about the software and all of a sudden things come out of you that are expressions of what you're thinking, you're already well down the path of being a cyborg.

Now, all the tools I just described have a growing, flourishing ecosystem around this cyborg idea. There are all kinds of interesting variants and power tools and enhancements showing up. But we're not getting to sort of general-purpose intelligence like humans have. Instead, not being like a human is really not a barrier.
There's plenty we can do. The system is self-improving now, in the sense that we can give these systems all sorts of prompts and help to start solving the gaps and the problems that are showing up in the system.

Fusion across these systems will happen. Part of the problem is that you don't want to give a neural network your accounting, your bookkeeping. Neural networks are not precise; they're just a map of neurons that seems to understand its inputs. You want something that'll actually always give you four for two plus two. But when you start to fuse, when you start to fact-check the results given to you by machine learning, then you can get to some interesting places. When you start to connect a series of these things together, you start to get higher-level reasoning that in many cases could make far better decisions than humans can. That's interesting territory.

So we're heading down the road either toward a utopia or a dystopia, depending on how we implement these technologies. They're that powerful. Whether we end up in the U or the D really depends on the choices that we make today, the choices that you make today.

Thanks for listening. This is one of several five-minute universities. I encourage you to make one too, but in the meantime: be a good cyborg.
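A postscript on the fusion idea above, sketched in code. The shape is: route the precise part (arithmetic, the thing that must always give you four for two plus two) to exact ordinary code, and fact-check the learned system's answer against it. Here `model_answer` is a placeholder assumption standing in for whatever a neural network returned; no real model is involved.

```python
import operator

# The precise component: exact arithmetic that always gives 4 for 2 + 2.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def exact(a, op, b):
    """Deterministic arithmetic -- the bookkeeping you would never
    hand to a neural network."""
    return OPS[op](a, b)

def fused_answer(a, op, b, model_answer):
    """Fusion sketch: accept the model's (placeholder) answer only when
    the exact check agrees; otherwise return the corrected value."""
    truth = exact(a, op, b)
    return model_answer if model_answer == truth else truth
```

So `fused_answer(2, "+", 2, model_answer=5)` comes back as 4: the imprecise component proposes, the precise component disposes. Chaining checks like this across systems is one way the "connect a series of these things together" step could work.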