Welcome back, and happy new year. I haven't been attending talks over the past three weeks; I had something going on on Sundays, but it's good to see all of you back, new faces included. My name is Kathik. I'm a graduate student, and I work with a small subset of artificial intelligence called deep learning, which I apply to medical image analysis. I can't say I'm really new to the field: I've been learning about machine learning for about three years, since my undergraduate studies in mechanical engineering, where I took a final-year project in data science and machine learning, and I've been applying these techniques ever since. It's been quite an interesting journey, because when you're learning about machine learning and deep learning, so many resources covering how these things work, all the way from the theory to the code, are available on the internet. It was just amazing to me that something once considered so profound, artificial intelligence, is actually accessible to someone with a laptop or a low-spec computer, and I felt it would be interesting to talk about some of the techniques I use in my research and give a very basic overview of how they work.
Part of the other reason I wanted to give this talk is that in the past I've given a Science Circle talk about computer vision: how computers perceive images, how they classify them, how they find objects like people and cells in them. Since I made a note then about deep learning, I thought it would be interesting to cover it more deeply. As a quick overview: deep learning is a subset of machine learning, which is a subset of artificial intelligence. AI broadly works to enable machines to copy human behaviour, whether in decision making or even creativity. Machine learning is the subset of AI that involves learning from data, which presents a lot of unique advantages I'll talk about in a bit, and deep learning tries to mimic our neural behaviour in order to learn from large amounts of data; it has a lot of potential for both automation and data discovery. When we think about artificial intelligence, the references that come to mind immediately are things like Terminator, and one thing I find very interesting is that we all think about cyborgs, like the Borg in Star Trek. If you're like me and you've been watching Star Trek: Discovery, Control was a topic in season two, and Data has to be one of my favourite characters in science fiction: he's always trying to understand human behaviour and human emotion, always trying to figure things out. It's interesting how these ideas have evolved in media, in both a light-hearted manner and a scary manner that gives people the impression artificial intelligence is going to destroy the world.
This talk aims to cover the basics behind these machines. Obviously the ones depicted in media are very complex systems, but through this talk I hope to introduce some of the concepts behind image recognition and text recognition: how do these things understand data, or the words in our sentences; how do they manipulate them and then reproduce them? I hope that when you see these things in media, you'll understand them a little better. This definition was introduced by one of the pioneers of machine learning, Arthur Lee Samuel, in 1959: giving computers the ability to learn without being explicitly programmed. What do I mean by this? In a traditional computer program you would usually develop a set of rules and tell the computer how to examine a problem. Let's take an example: spam recognition in your email. When a message goes to your spam folder, what normally happens is that the computer applies a set of rules to the subject or the content of the mail and then sends it to the spam filter. You could program this directly: you could tell the computer to look in the subject line for certain suspicious pieces of text, say "loans", or dollar signs, which are awkward in a subject line, or words like "amazing" and "free", and if those things come up, send those emails to the spam filter. But obviously you, as the developer of the program, have to understand what these anomalies are, what defines spam and what doesn't. You have to develop these rules yourself, tell the computer what they are, and then tell it to automate this sort of filtering.
But that puts a lot of load on you. That's a lot of data you have to produce, look for, and find; maybe you have to run surveys, put everything into an Excel sheet, load that sheet into the program, and run it. That's extremely time-consuming and resource-consuming, and by the time you've launched the program, you don't even know whether it's still relevant, because this data changes over time. As the people sending spam get smarter and recognise that you're not reading their phishing emails, the subject lines and the content get updated, and these programs become irrelevant. What if there were a way to automate this development of rules? That's where data science comes in. You could take vast amounts of data from the internet, and there's this big thing called big data going on now, right? You could collect a vast number of these incoming spam emails, feed them into a computer algorithm, and have the computer learn from them: what is it that people put into their spam folder, and what is it that they don't? It could find those differences, those nuances in the data, that cause people to decide one way or the other, and from that it could automate the filtering. This use of data to learn patterns and apply them to problems is what machine learning is. A very simple example is something many of you may have done in high school science, or even before that.
There are some very simple experiments on velocity and time where you lift a ball to a certain height, drop it, and time how long it takes to stop bouncing, a loss-of-energy experiment, or you release a cart down a slope and time how long it takes to travel from one point to another. You would record these data points on a graph sheet; it can't just be me who has done this, right? Then you would take a long ruler, hopefully a 30 cm one, place it over the points, and try to find the best fit, the line that minimises the total distance to all the points, so that the error is minimised. What you were doing is actually what many machine learning algorithms do: it's called linear regression. Linear regression analysis is the attempt to predict one dependent variable, or target, from a series of other independent variables, or features. For example, if you wanted to predict the price of a house, you would take variables like the size of the rooms into consideration, put them into a plot, and compare them. The dependent variable would be the price of the house, and the independent variables, the features, would be things like the size of the rooms and the location; location is a categorical variable, so you might assign it a number, and so on. A computer is capable of processing these multiple sets of features and determining which one contributes most to the dependent variable, the price of the house. It is capable of automating this process for you.
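The ruler-on-a-graph-sheet exercise can be written down directly. Here is a minimal sketch of ordinary least squares in plain Python; the house sizes and prices are invented for illustration, not data from my research.

```python
# Line of best fit, computed the way a machine does it rather than with a
# 30 cm ruler: ordinary least squares on (x, y) points.

def fit_line(xs, ys):
    """Return slope and intercept minimising the squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical "house price" data: price grows roughly with floor area.
sizes = [50, 70, 90, 110, 130]       # square metres
prices = [150, 190, 235, 270, 310]   # thousands

slope, intercept = fit_line(sizes, prices)
predicted = slope * 100 + intercept  # predict the price of a 100 m2 house
```

The line the computer finds is exactly the one your ruler was approximating: the one minimising the summed squared vertical distances to the points.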
You can use this for various types of application. Prediction, for one: in the midst of the COVID pandemic, factors that influence the spread of the disease can feed predictions of how many cases there will be in the future, and that sort of thing can be automated through these algorithms. What's also possible is classification. You might want to use a certain set of variables to determine whether a cancer belongs to a certain molecular class. Cancers are very heterogeneous, with very different subtypes; in the one I study, brain cancer, the expression of certain genes contributes to things like whether or not the tumour responds well to radiotherapy or to immunotherapy, and to the survival outcomes of the patient. Survival outcome would obviously be a regression problem, but whether or not the cancer responds well to radiotherapy or chemotherapy is a classification problem. What you would do is plot these features on axes, and the computer would try to fit a model; the attempt, in the leftmost picture, is to best separate these variables into different clusters, so that in the future, when you have new data, you can classify a certain disease, or even more abstract things like animals or cars. I'll talk about this more later, but self-driving cars classify things like traffic signals and buildings, and that's a classification problem. There are two types of classification algorithm.
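Separating labelled points into clusters so new data can be classified is easy to sketch: learn one centre per class, then assign new points to the nearest centre. This is a toy nearest-centroid classifier; the "gene expression" features and the response classes are entirely hypothetical, not the actual markers used in brain cancer.

```python
# Minimal nearest-centroid classifier: compute the centre of each labelled
# class from training data, then classify a new point by the closest centre.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(x, centroids):
    # pick the class whose centre is nearest in squared distance
    return min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centroids[c])))

# Toy training data: (gene_a expression, gene_b expression) per tumour
responders     = [(0.9, 0.2), (1.1, 0.3), (1.0, 0.1)]
non_responders = [(0.2, 0.9), (0.1, 1.1), (0.3, 1.0)]

centroids = {"responds": centroid(responders),
             "does not respond": centroid(non_responders)}

label = classify((0.95, 0.25), centroids)  # a new, unseen tumour
```

The dividing surface between the two centres is exactly the kind of separating boundary the leftmost picture on the slide is showing.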
One is supervised classification, the one on the left: I've told the computer what the classes are and asked it to automate the classification for me. The one on the right is unsupervised classification, where the computer finds the correlations in the data itself: it looks at where the points are positioned in the plot and finds the clusters on its own, which is very useful for data-driven discovery. The thing about all this is that I've told the computer which features to use. If I'm deciding on the price of a house, I tell it that the size of the rooms is a feature I want to employ, or the location, the distance from a train station. These features are things I have to know in advance, so that manual input is something I have to decide on. But what if you have a great many features, where every column in an Excel workbook is a feature? In one example I was analysing cells in tissue images, looking at shape measures like circularity and the size of the nucleus; if a nucleus looks very enlarged or morphologically irregular, there's a high chance the cell is cancerous. With a feature in every column, it can be a big headache to decide which of them are relevant to the final outcome, which best describe whether or not something is a cancer, and if you don't have an idea of what's relevant and what's not, it's a big headache. That's where we can use something like neural networks. These were inspired by the structure of the neurons in animal brains: there is a cell body, and it contains a nucleus and most of the cell's complex components.
There are branching extensions called dendrites, and then a very long extension called the axon. What basically happens is that these neurons produce short electrical impulses called action potentials; these travel along the axons and make the synapses release chemical signals called neurotransmitters. When the next neuron receives a sufficient amount of these neurotransmitters, it very quickly fires its own electrical impulses, and so the signal travels through the neurons in the brain. That, roughly, is how our thought process works. Then someone decided to simulate this artificially: Frank Rosenblatt, in 1957, and that led to the birth of something called the perceptron, one of the simplest structures in an artificial neural network. The way it works is that it takes inputs, like the size of the bedrooms in the housing-price problem, multiplies each by a certain weight (it may just make a guess at these weights at the start of the problem), adds them together with a certain bias, and feeds the output to something called an activation function. The activation function takes that output and decides whether or not the neuron fires. So if, for example, you multiply the size of the bedrooms by a certain amount, multiply the distance from the train station by another amount, and add those together with the bias, you get a certain value; you feed that into the activation function, which decides whether the house is going to be, say, class one, expensive, or class two, inexpensive.
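The perceptron just described fits in a few lines. This is a sketch with hand-picked weights, purely to show the multiply-weight, add-bias, step-activation pipeline; the feature values and weights are made-up numbers, not a trained model.

```python
# A single perceptron: multiply each input by a weight, add a bias, and pass
# the sum through a step activation that decides whether the neuron "fires".

def perceptron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0  # step activation: fire (1) or don't (0)

# Hypothetical features: bedroom size (m2) and distance to a station (km).
weights = [0.5, -2.0]  # bigger rooms push towards "expensive";
bias = -4.0            # distance from the station pushes away from it

house_a = [20, 1.0]    # large rooms, close to the station
house_b = [8, 5.0]     # small rooms, far away

class_a = perceptron(house_a, weights, bias)  # class 1: expensive
class_b = perceptron(house_b, weights, bias)  # class 0: inexpensive
```

In a real network the weights and bias would not be chosen by hand; they are the quantities the learning algorithm adjusts, which is what the next part covers.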
That's a very simple application of this sort of structure, but we can expand it into a neural network. I've used this slide before, so the theme is a bit different, I apologise, but this is what it looks like, and it basically tries to emulate a human brain's network of neurons. The neuron, as I've explained, is already a mathematical function, and it collects and classifies information according to a specific architecture. Information is fed into the input layer: this is where you put your variables, your features, the housing prices, the cell morphology, whatever they are. That's transferred to a hidden layer, where it's multiplied by a set of weights (the computer makes a guess at the start and alters them later), a bias is added to every input after the weights are applied, and depending on the value that comes out, an activation function determines whether or not the next node fires, performing feature extraction. What's going on is that the computer is trying to figure out what those weights should optimally be to achieve a classification that corresponds to the classes you have given it. Based on variables like the cell size, the cell circularity, and the maximum calliper of the cell, which measures how much the cell is stretched or compressed, I've told the computer that each example is either cancer or not cancer, and the computer keeps adjusting these weights, going back and forth through this algorithm, until its classification matches the classification that I've given.
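That "adjust the weights and go back and forth" loop can be shown with the smallest possible network: one sigmoid neuron trained by gradient descent until its labels match the ones given. The single "nucleus size" feature and the labels are invented toy data, not my real cell measurements.

```python
# A single sigmoid neuron trained by stochastic gradient descent: repeated
# forward pass, error signal, weight nudge, until predictions match labels.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One feature (say, nucleus size) and a 0/1 label (not cancer / cancer).
data = [(1.0, 0), (2.0, 0), (3.0, 0), (6.0, 1), (7.0, 1), (8.0, 1)]

w, b = 0.0, 0.0        # the initial guess at weight and bias
lr = 0.5               # learning rate
for _ in range(2000):  # go back and forth over the data
    for x, y in data:
        p = sigmoid(w * x + b)  # forward pass: the neuron's current guess
        grad = p - y            # error signal (cross-entropy gradient)
        w -= lr * grad * x      # nudge the weight towards less error
        b -= lr * grad          # nudge the bias too

predictions = [1 if sigmoid(w * x + b) > 0.5 else 0 for x, _ in data]
```

After training, the learned weight and bias place the decision boundary between the two groups, so the neuron's classification matches the labels it was given.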
It's a little odd to talk about this in terms of raw numerical data; it's hard for us to visualise what it actually is. But what if we apply the same neural network to images? What would the multiplication of weights look like there? When we input images into a computer, we get a two-dimensional matrix: every pixel in the image corresponds to a number in the matrix. Black, for example, could be zero, the absence of colour, and white could be one, so we can convert these things to numerical values. From there we can apply things called convolutions, the multiplication of certain operators over these images, to highlight certain features. The weight I was talking about earlier, the value of w that was applied, would be a convolution in this case. What you see happening is that in the first scenario, the top image, the convolution is highlighting the vertical lines in the image. We've applied this convolution to our Science Circle logo, and in the top image you notice the vertical lines are highlighted; when we apply the bottom convolution, which is a different matrix altogether, the horizontal lines in the image are highlighted instead. But what's the purpose of this? Well, think about comparing a giraffe and a human: what would you look for in a giraffe that's different from a person? The number of legs, for example: a giraffe has four legs and a human has two, so in a picture of a giraffe you'd see far more vertical lines than in a picture of a human, and the top filter, the one that highlights vertical lines, would contribute a great deal to differentiating a giraffe from a dog, a cat, or a human.
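Here is the convolution trick in miniature: slide a small filter matrix over the image matrix and sum the element-wise products. One kernel responds to vertical lines, the other to horizontal ones. The tiny binary "image" and the kernels are illustrative choices, not the filters from the slide.

```python
# 2D convolution (valid mode): slide the kernel over the image and take the
# sum of element-wise products at each position.

def convolve2d(img, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(img[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# 5x5 image containing one bright vertical stripe (1 = white, 0 = black)
image = [[0, 0, 1, 0, 0]] * 5

vertical_kernel   = [[1, 0, -1], [1, 0, -1], [1, 0, -1]]  # left/right edges
horizontal_kernel = [[1, 1, 1], [0, 0, 0], [-1, -1, -1]]  # top/bottom edges

v_response = convolve2d(image, vertical_kernel)
h_response = convolve2d(image, horizontal_kernel)
```

The vertical kernel lights up strongly on either side of the stripe, while the horizontal kernel stays at zero everywhere, exactly the selective highlighting shown on the slide.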
We could apply these convolutions to mathematically differentiate the two images, and a computer could use them. But if we don't know in advance what the right features are, it's a lot easier for the computer to guess and adjust these filters itself, to change those numbers and play around with them, back and forth, until it manages to differentiate the pictures the same way we would. That's the power of a convolutional neural network: a neural network applied to image classification problems. This is what the architecture looks like; I realise it's a bit abstract, but broken down, intuitively, the lines we saw earlier, the horizontal lines, the vertical lines, the dots, are low-level features, and as you go deeper into the convolutional network, they combine into mid-level and high-level features like wheels, legs, ears, a nose, a face, and that helps the computer identify what the object actually is. It's used in things like facial recognition. When you use Snapchat filters and want the computer to put a set of ears on your face, or the snout of a dog, which is very popular on Instagram nowadays, how does the computer know where on your face to put it? It has to recognise where the nose is, where the ears are, where the eyes are, first of all. That's how these things work: the low-level features that identify the lines and dots translate into higher-level features through the network, and that helps it identify and differentiate those images.
As for errors in facial recognition, I think what happened there reflects one of the disadvantages of these neural networks: the data we feed in has a very big impact on the outcome. It's very possible that not a lot of training data from people with darker skin was input into the algorithm for it to recognise such faces as human, which is very unfortunate. Bias is what happens when you apply data sets that reflect our own biases. Say a company has hired far more employees of one gender than the other, and you give the computer this data and tell it: I want you to automate the hiring process, read the data in the CVs people submit, and decide on that basis who to hire and who not to hire. One of the things that happened at Amazon was that the training data didn't contain many CVs whose education sections mentioned names like "women's university" or "women's college". When the computer trained on this data was applied to new applicants coming into the company, it selected very few women, because the minute it saw that someone came from a women's college, something it hadn't seen in its data set, it didn't consider them fit for hiring. These inherent biases of ours can be exaggerated by computer systems, and it becomes very dangerous to deploy them, which is why it's so important for people to evaluate the data sets these systems are trained on. That leads us to the advantages and disadvantages. A neural network is able to automatically deduce features from images.
I don't have to tell it to look for a line or a dot; it does that itself, and it's applicable to several types of data and formats. What I've demonstrated here is images, because that's what I'm most familiar with, I use it for image classification, but it's also applicable to things like text. There's something called natural language processing, which is capable not just of deducing what certain words are but, based on the order the words are placed in sentences, of generating its own sentences, even its own creative poems. It's also applicable to time-series data, signal processing, noise reduction in signals; it can be used for several different engineering problems. These architectures are also capable of being adapted to new problems. The low-level features you see in the image here, the lines and the dots, need a lot of data to learn, but who said that if you're classifying cars you can't use images of, say, animals? Sometimes you look at the clouds in the sky and say: that looks like a horse, that looks like an elephant, that looks like my dream car, a Lamborghini. When we see those similarities in new images, it's the shapes of things, the lines and dots, that let us make the comparison, and computers are capable of doing the same. There's a mode of machine learning called transfer learning, a type of deep learning where I take a model trained on one problem and apply it to another. There's a database called ImageNet, which has a huge number of images of cars, animals, and objects like street lights and buildings.
All this data has been used to train some very powerful models, and these models can be retrained, reusing those basic low-level features, to classify cancers, extract tumours, and do several other things, even in astronomy; I would imagine a neural network is behind the architectures George uses. But of course there are disadvantages. You still need a good data set, because the model might extract the biases in the data and exaggerate them, which raises a lot of ethical issues and makes it very hard for clinicians to trust such a classifier. It's also not easily interpretable: in these structures the computer is making deductions from trends it finds in the data involved, but it's not exactly telling you what those trends are, and it's hard for us to extract the learning process and understand how the computer is learning from the data we've input. There are different solutions we can use for these problems. At this point I want to direct you to a platform called Google Colab; I realise I should have gone to another slide first, but first things first. These methods require a lot of computational power, which is another disadvantage: they require a lot of data, and a lot of data requires a lot of computational power to process. When I run deep learning on my laptop, the GPU temperature goes up to about 80 to 90 degrees, which is extremely hot; it's not comfortable to type on at that point, and it can really damage your PC. To get the computational power we need, there are a lot of resources, and one of the big things these days is cloud computing, which is accessible not just to me but to you as well.
Google provides this environment, Colab, which allows you to code these networks in Python and also learn from tutorials people have publicly uploaded. It provides GPUs, graphics processing units, so people can run their code, their image analysis, their neural networks, their deep learning and machine learning models, on Google's computational resources for a limited amount of time; there are paid tiers beyond that. It's a very easy way to learn from other people who have applied these models, to try their algorithms, to apply them to your own data science problems if you have them, or at least to tell yourself: hey, I have applied artificial intelligence, and it's not that far away; it's actually accessible to a lot of people these days. That brings us to interpretability. The black box in artificial intelligence is something we don't understand at this point, something people have been working very hard to unpack. In a convolutional neural network, all these convolutions, these transformations, are performed on the images, and they ultimately make the images very differentiable in terms of their classes. The question you want to ask the computer is: what are the transformations you are making that lead to this outcome? What are you seeing in the data that leads to the classifications involved? Based on that, you can determine whether or not it's easy for you to trust the computer. There's a technique called Grad-CAM, gradient-weighted class activation mapping, where the computer shows the user, as a heat map, which locations in the image it is looking at to make its classification: these locations seem to possess features like a face, legs, or ears.
The computer is saying, in effect: I've seen these things in the images, and that's how I've arrived at this classification, that's how I've differentiated a human from a dog, and so on. Caltech has a data-driven discovery institute, and one of the things I've learned from deep learning algorithms is that the journey is more important than the end. This process of interpreting how conclusions are reached may help us find out new things about the data set that we didn't know before. There might be differences in the cancer images that we never knew about, that help the computer make a diagnosis, things we never put in ourselves. Maybe the cancer cells cluster around a certain kind of neuron or a certain blood vessel; maybe we didn't see these things before because the data set was too large for us to look at all the images at once. By interpreting the journey the computer takes to make its classifications, it's possible for us to find out new things about our data in the first place, and that might lead to discoveries. That's what's more exciting to me: rather than the process of automating classifications, the discovery involved in the journey to making those classifications is something I hope to explore more in my research. Another thing that's really interesting to me is the use of these algorithms to generate data. There's something called a generative adversarial network, and it's a very interesting architecture: it basically involves two neural networks, in this case two convolutional neural networks.
They play a game with each other. The one at the bottom left is called the generator, a deconvolutional neural network. It takes random noise, like the static you'd see on a television getting no signal, all those random pixels on the screen, and applies deconvolutions, transforming the pixels until it generates an image. That's fed into a discriminator, and you also give the discriminator real images: real faces, for instance. I know someone has done this with images of avatars in SL, and there are fake videos of Obama giving a speech that were made by these networks. So these two networks, the discriminator and the generator, play a game with each other: the generator tries its best to transform the image until it looks like a real face, and the discriminator tries its best to tell which images are real and which are fake, and by playing this game, computers become capable of generating their own images. What I say next might be a bit dangerous, but someone has actually used a neural network to generate fake speeches. There's an example available on YouTube that you can watch and have a laugh at: a recurrent neural network was fed a lot of transcripts of Trump's speeches, and what it tried to do was generate a fake Trump speech; that was its generative capability. You can listen to it, and I think the person delivering it is fantastic, because he does a very good impression of Trump.
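The two-player game can be sketched in one dimension, stripped of all the image machinery. A one-parameter "generator" shifts random noise until it resembles the real data (numbers around 4.0), while a tiny logistic "discriminator" tries to tell real from fake. This is only a toy illustration of the adversarial idea, with made-up numbers; real GANs pit two deep convolutional networks against each other.

```python
# A toy 1D adversarial game: generator vs discriminator trained in turns.
import math
import random

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

rng = random.Random(42)
shift = 0.0       # generator parameter: fake sample = noise + shift
w, b = 0.1, 0.0   # discriminator: D(x) = sigmoid(w * x + b)
lr = 0.05
history = []

for _ in range(3000):
    real = 4.0 + rng.gauss(0.0, 0.1)    # draw a "real" data point
    fake = rng.gauss(0.0, 0.1) + shift  # the generator's attempt

    # Discriminator step: push D(real) towards 1 and D(fake) towards 0
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * ((1.0 - d_real) * real - d_fake * fake)
    b += lr * ((1.0 - d_real) - d_fake)

    # Generator step: move shift so the discriminator rates fakes as real
    d_fake = sigmoid(w * fake + b)
    shift += lr * (1.0 - d_fake) * w
    history.append(shift)

avg_shift = sum(history[-500:]) / 500  # smooth out the adversarial wobble
```

As the game proceeds, the generator's shift drifts towards the real data, which is the same dynamic that lets image GANs turn noise into convincing faces.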
impression of Trump. What really comes out of the computer is sometimes a bit nonsensical; I think Donald Trump makes a lot more sense when he talks. But yeah, it's funny, you can have a laugh. There are some nuances that seem to mimic the way he talks, and they adjust its creative output, they make some alterations to give it more room to be creative in its speeches, and towards the end you'll start seeing that it's actually speaking a lot of nonsense. But do check this out on YouTube. And when I think of the way AI is depicted in the media, in games like Detroit: Become Human, that was an interesting game that depicted AI. One of the scenes that struck me most was of this android called Markus. He'd been living with this artist ever since he was purchased, and this guy asked him to draw something of his own, to be creative, to do a painting that he had never seen before, and he was able to do that, and it was amazing to me. I think Data tried this a lot of times too. But I think that creativity is built on existing knowledge, existing ideas, existing visualizations of art, and that's how we generate these new creative pieces that we haven't seen before, and I think that computers today already have the potential to do that. So maybe it's not so far away; the media illustrates the potential of AI so well, and these realizations are possible. And with that I'd like to say thank you. I hope this presentation has been insightful, I hope it's given you a basic understanding of where we start with AI and where it can lead us, and I hope that when you see these systems, cyborgs, androids and all that in media, you have some basic understanding of how these architectures are built and where they start, and yeah, I
hope you've enjoyed my presentation. Is a hologram based on similar AI algorithms and biases in this game? Well, it depends on what part of the hologram you are discussing. So for example, Shiloh, if we were talking about the way a hologram is depicted, the construction of a hologram, then what the hologram looks like could very much be based on our own biases, because it could be constructed based on what people think is an attractive face, an attractive shape of a body, those sorts of things. So the visual depiction of a hologram, of a person: if that is generated by, say, a generative adversarial network, then what it looks like is ultimately dependent on the data we've input. For example, how many of us are in this lecture theater? About... two, three, four, five, six. Well, if a lot of avatars look like mine, and you fed those into a computer system and told it to generate a new holographic image based on "this is what humans look like", and they all look like me, then ultimately the generated product would look a lot like me; chances are it wouldn't be female. So based on that data set, the generated hologram, that figure, would look a lot like the images you've input. If you're talking about biases and discriminatory decisions, then there are various sorts of technologies used by the hologram. For example, it's able to see you, it's able to see a person, it's able to hear what a person is saying, it's able to transcribe that into text, and from the text it's able to extract certain things and give responses. If you gave it data saying that these sorts of buzzwords, these sorts of sentences that people have said, are most likely
to trigger anger, then you look into messages that are sent by people to each other, and you say, for these text messages it seems as though someone is responding angrily, and you tell that to the computer. So the computer will look for these sorts of features in text, the words and phrases people have used, and it will apply a bias: the bias would be that if it hears this kind of text, it's going to get angry. I know it sounds very abstract, and it's very hard to explain, but the point is that when we look at a complex system like a hologram or an android, it's important for us to break it down into its different components, its different features: responses to audio, responses to images, responses to touch. It's not just one algorithm that's applied; it's a lot of functions, a lot of data, a lot of different components. It's better to break an engineering problem down into its different parts and then talk about biases and discriminatory decisions than to look at it as a whole. Yeah, you know, negative information is something I have a problem with in the media, and it's also something I want to complain about. In our Science Circle discussions, and I'm sorry to say this, but at our fireside chats we've always tended to end up talking about negative stuff in the media, negative stuff coming out in the news, in newspapers and all that. When you open it up, the first thing you see is a lot of articles about the COVID situation, and this minister is making that decision, and this murder was committed. It's so much negative news coming out today, and I was just thinking about the Skynet situation in Terminator. You know, it decided
that it had access to information presented in the media, in news proliferated through the internet and through social media, and seeing all this information, looking closely at what people were posting and what was coming out, I wouldn't be surprised that it concluded the human race was basically doomed for disaster, and then made the decision that it was better to just nuke the planet. Yeah, it's just how much of that is available. But of course, Shantel, if you're talking about how it responds to information, that's also different. For example, as a human you might have a different emotional response to something negative versus something positive; maybe something negative gives you a stronger emotional response, maybe you skim over the positive articles about, you know, this guy making a donation to a hospital. You have your own biases. Obviously, computer systems can be trained, and the weights can be adjusted; the weight is a very important thing. So let's say you want to give a positive news article more weight than a negative news article; then a computer might be able to adjust its biases accordingly. I think it's possible to manually tune the weights used in these algorithms to make a computer respond more strongly to positive information than to negative information. I have been doing a lot of thinking about this, about what's presented in the media, what's on the internet, and what a deep learning algorithm with access to so much information, so much big data through a vast network, would have access to. But that also gets me thinking about exactly what we are exposing our next generation to, the generations that are exposed
to media, so much media proliferation through Instagram, through Facebook, through Snapchat or TikTok or whatever it is people are using these days. That exposure, you know: if we think about ourselves as neural networks, as computers that are capable of picking up biases and all that, then what we are exposing the next generation to on social media, and in our discussions at the Science Circle, at our fireside chats, is very important for us to consider. Because if we feed them negatives, if we keep discussing the negatives, and I know somebody said something about the military being private and all that, or people generally being aloof about the environment, people are like this, people are like that, then that influences the biases the next generation brings to how they treat each other, how they treat other people, how they treat problems they may face. So just as it's important for me as a data scientist to think about the data I'm feeding a computer, it's very important for us to think about the things we are teaching. And yeah, I hope that through my presentations of technologies, of ideas, of things we have developed through human ingenuity, and I have a lot to learn about presenting, I hope to point out that there is a lot of potential for us to address problems that exist today, and that it's accessible, something we can play with, and it's never too late to learn how these things are applied. As we go on through these talks over the course of the year, I suppose my resolution is to try and induce some positivity in our discussions and our talks, and eventually in our thought processes. Are there any other questions, any feedback, any thoughts? Do you think I could have
been a lot smoother? Yeah, some of it is a bit of a repeat, the deep learning portion; I know somebody has given a talk on that. Thank you, thank you so much. I do hope we get more students participating in Science Circle talks eventually. I'm not sure if there are any graduate students here now, but it's a good opportunity for us to expand. Thank you, this has been a privilege as usual. I hope that we'll have more talks about AI and programming and computer science and things like that as we go on. But yeah, thank you very much everyone for coming, for taking up your evenings. I hope you sleep well tonight, not thinking about the dangerous potential of artificial intelligence, that is, but with some notion of how much ingenuity we possess, how much we've displayed in the past, how much more we have to show, and how much potential we have to solve the problems going on around us. Maybe my next talk should be about the practical applications of all of these: how it's being used in medical research, how it's being used in demographics and policy decisions. It would be quite interesting to have that. But yeah, thank you, thank you so much. I will turn off my voice.
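A footnote for anyone reading the transcript who wants to see the generator–discriminator game from the talk in concrete form. This is a deliberately tiny one-dimensional sketch in plain Python, not anything from my research: the "real" data is just a Gaussian, the generator and discriminator are single affine units rather than convolutional networks, and the learning rate, distributions, and step count are all illustrative assumptions.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data: a 1-D Gaussian the generator never sees directly.
REAL_MEAN, REAL_SD = 4.0, 0.5

# Discriminator d(x) = sigmoid(w*x + c): outputs P(x is real).
w, c = 0.1, 0.0
# Generator g(z) = a*z + b: transforms noise z ~ N(0, 1) into a sample.
a, b = 1.0, 0.0

lr = 0.05
for step in range(2000):
    # --- discriminator step: push d(real) toward 1 and d(fake) toward 0 ---
    x_real = random.gauss(REAL_MEAN, REAL_SD)
    z = random.gauss(0.0, 1.0)
    x_fake = a * z + b
    p_real = sigmoid(w * x_real + c)
    p_fake = sigmoid(w * x_fake + c)
    # gradients of the loss -log(p_real) - log(1 - p_fake)
    w -= lr * ((p_real - 1.0) * x_real + p_fake * x_fake)
    c -= lr * ((p_real - 1.0) + p_fake)

    # --- generator step: transform noise until the discriminator is fooled ---
    z = random.gauss(0.0, 1.0)
    x_fake = a * z + b
    p_fake = sigmoid(w * x_fake + c)
    g = (p_fake - 1.0) * w  # gradient of -log(d(fake)) w.r.t. x_fake
    a -= lr * g * z
    b -= lr * g

# E[g(z)] = b, so b should have drifted toward the real mean of 4.
gen_mean = sum(a * random.gauss(0.0, 1.0) + b for _ in range(1000)) / 1000
```

The adversarial structure is all there in miniature: the discriminator's update rewards separating real from fake, while the generator's update follows the discriminator's own gradient to close that gap, exactly the "game" described in the talk.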
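And on the question of tuning weights so that a system responds more strongly to positive information than to negative: here is a minimal sketch of that idea. The keyword lists and the weight values are invented for illustration, nothing like a real sentiment model; the point is only that a single multiplier decides how loudly each kind of evidence counts toward the outcome.

```python
# Toy keyword-counting sentiment score with a tunable class weight.
# Both word lists below are made up purely for illustration.
POSITIVE = {"donation", "recovery", "breakthrough", "rescue"}
NEGATIVE = {"murder", "disaster", "outbreak", "scandal"}

def score(article, pos_weight=1.0, neg_weight=1.0):
    """Sum weighted keyword hits: > 0 leans positive, < 0 leans negative."""
    words = article.lower().split()
    pos_hits = sum(word in POSITIVE for word in words)
    neg_hits = sum(word in NEGATIVE for word in words)
    return pos_weight * pos_hits - neg_weight * neg_hits

headline = "donation funds recovery after disaster and scandal"

balanced = score(headline)                    # 2 - 2 = 0: a tie
upweighted = score(headline, pos_weight=2.0)  # 4 - 2 = 2: now leans positive
```

The same mechanism appears inside real learning systems as class weights on the loss function: raise the weight on one class and every training error there pulls the model harder, which is how the biases we feed in, deliberately or not, shape what the system attends to.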