One of the original approaches that was postulated was this: we capture the sum total of all human knowledge and put it into systems that could then utilize that knowledge to perceive and deal with the world. Now, there are fundamental problems with this. Okay, so the first thing is that you don't know what you know. A lot of our knowledge is implicit, not explicit. The fact that I am Anand is fine, but what constitutes a human body? What is a face? And so on and so forth. These are things you have learned growing up; they're not things you have necessarily assigned labels to. So there's a lot of knowledge that is extremely implicit in how we perceive and deal with the world, and this isn't captured if you try to dump in the sum total of all human knowledge, even if that were possible and we had a large enough database. Now, the second problem with this is that we're extremely delusional. You don't actually know what you're seeing; what you're seeing is an illusion created by your brain out of incomplete parts. So, an example I'm pretty fond of repeating, so if you have ever heard me talk before, bear with me: you have three red pixels racing right at you. What are they? It's hard to answer that question when you don't have enough information about the context. Add context: it's a cricket stadium. You all know what the answer is. Add context: you're in the middle of a food fight. You know exactly what it is now. The context allows you to understand what those three pixels could be. Extremely insufficient, low-resolution information can still be deciphered because of the context, and because you are expecting something like that. Now, what do you do about that? Well, your response isn't necessarily going to be very different whether a cricket ball or an apple is rushing at your head.
You probably need to duck anyway. So the third part, which I've already alluded to, is that there's simply too much knowledge out there. You can't possibly capture everything, the sum total of our knowledge. Well, forget our knowledge; our knowledge itself is a subset of reality. So we simply can't capture all that information. So if you try to build a system that is dependent on the amount of knowledge you put into it, you're not going to get anywhere. Okay, so fine, let's assume that's not how you're going to go about building a brain. The second way you could do it is to say: hey, I know humans are intelligent, humans have brains, and brains seem to be the seat of the mind and intelligence. Let's assume that's not controversial. Why don't we study brains? Why don't we dissect everything there is to know, study every connection and every neuron in the brain, and actually recreate it? Would that constitute a thinking machine? Again, highly unlikely; the numbers are not in our favor. You have on the order of 80 billion neurons in your brain and trillions of connections, and these connections are the seat of memory. I won't go into that kind of detail here, and I'll probably cover some of it tomorrow in the workshop, but those connections that hold your memories, trillions of them, are about one micron in size, and the synaptic gap between two cells is 10 nanometers. We don't have tools that can even scratch at recording what content is actually there in these trillions of connections. So that doesn't sound very likely. But let's just say you can actually read brain states, and with all the neuroscience resources on this planet you figure out a lot about what's actually in a brain. How do you actually go about building it? How faithful are you going to be in your reproduction of that brain? Are you going to model neurons? Are you going to model networks?
Are you going to model ion channels, or are you going to go down to the quantum level? Every single researcher out there has a very different answer as to how important these different levels are. If you take somebody like Henry Markram of the Human Brain Project in Europe, he believes that you need to study a lot of detail down to the ion channel level, model it in excruciating detail, maybe model 10,000 neurons at a time on a supercomputer, and maybe that will give you sufficient information about the microcircuitry in a brain. He may be right, but we don't know. And yet that does seem to be the only way forward. We have to build models, lots of models. We have to try a lot of different variations on how we think brains are built and how machines can think, and through the failures and the successes we actually end up capturing a lot of this. Okay, so given that backdrop, let's see how we actually did it in history. For the next few slides at least, I'll talk a little bit about what has happened. Again, this is a very brief history and it will have a lot of holes, so forgive me for that. But let's start off with something fairly recent. In the 1940s, McCulloch and Pitts suggested that you could capture a brain's neuron with a very simple model. We know that neurons in the brain have essentially three major components. They have dendrites, which are tree-like structures that gather inputs. So you have inputs, represented over here, and then these dendrites modulate those inputs.
So you have weights that do that modulation, and then you sum all of that up in the cell body, called the soma, apply some kind of nonlinear function, and you get the output of the neuron. Now, this is actually a pretty good model of a neuron. Of course, it glosses over all the nonlinearities, the ion channels, and the extreme complexities that go into an actual biophysical neuron, but it does capture the overall goal: neurons do integrate information from multiple inputs, sum it, and say, hey, my inputs were sufficient, they crossed the threshold, I can fire and pass on that information. This eventually led to Rosenblatt coining the term perceptron to describe this kind of model, and the world was excited. The world was extremely excited: now we could actually build lots of neural networks, and these had the potential to do a lot of computation. What can they actually do? Well, you connect a lot of these units together into a neural network and you can start doing a little bit of classification. You have inputs that differ, they get integrated across these neurons, and you can tune those weights to differentiate between different classes of input.
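To make that concrete, here is a minimal sketch in Python (my own illustration, not anything from the talk's slides): weighted inputs are summed in the "soma" and pushed through a hard threshold, and Rosenblatt-style weight updates nudge the weights toward misclassified examples until a linearly separable toy problem, logical AND here, is solved.

```python
import numpy as np

def perceptron_output(x, w, b):
    """Weighted sum of the inputs (the 'soma'), then a hard threshold (the nonlinearity)."""
    return 1 if np.dot(w, x) + b > 0 else 0

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Rosenblatt-style rule: nudge weights toward each misclassified example."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = yi - perceptron_output(xi, w, b)
            w += lr * err * xi
            b += lr * err
    return w, b

# A linearly separable toy problem: logical AND of two inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([perceptron_output(xi, w, b) for xi in X])  # → [0, 0, 0, 1]
```

The learned weights and bias define exactly the kind of linear decision boundary the talk turns to next.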
Okay, so just a little bit of jargon over here. If you want to separate an n-dimensional input into classes, what you do is construct an (n minus 1)-dimensional surface, something called a hyperplane. So if you're in two dimensions, your hyperplane is essentially just a line. These linear neurons could therefore end up doing a lot of classification, and everybody was ecstatic. Okay, they were so ecstatic that, based on Rosenblatt's comments, The New York Times essentially said, hey, we're going to be able to create conscious entities. And it wasn't just the neural network camp: the other side of AI was making similar claims, saying, hey, we're going to be able to build machines that can think within 20 years. Forget humans, you're going to build much more intelligent beings. All of this happened in the 50s and 60s. Now, as you can imagine, their predictions were way off. They simply did not grasp how difficult the problem is. Everything I said in the first couple of slides about how difficult this problem is somehow slipped past them. These were very smart people, so it's not necessarily that they didn't think about the problems; they were just overly optimistic. Now, Minsky, famously, along with Papert (I think that's the pronunciation), published a book on perceptrons, which is pretty good.
He illustrated the potential of perceptrons, but then he did a little bit of a disservice when he started dissing the perceptron and what it could do. And while there's a lot of controversy about what Minsky said about perceptrons and whether that is responsible for the first AI winter, I'm going to add one extra thing. Minsky also pointed out that, sure, let's say you can actually build these wonderful neural networks and do all of this computation: you don't have enough computational power to actually build anything useful. And I think that also severely contributed to the first AI winter. So for the next decade, all funding dried up, DARPA contracts were cancelled, everybody on this planet was pretty much annoyed with everybody who claimed they were doing AI, and research came to a grinding halt. And remember, all of this was based on something as simple as a perceptron whose weights were handcrafted: they were assigned to solve problems, and all the problems were toy problems. Now, this is not entirely chronologically accurate, so I'm going to take a little liberty over here. The artificial intelligence, machine learning, and neural network people were wallowing in the misery of their dried-up funding.
Let's see what the neuroscience world was doing in the meantime. So, the perceptron was one abstraction, one representation of what a neuron could be, but that's not the only thing you can learn from the brain, and the neuroscientists continued chugging along. There are a few observations from these intervening years that stand out, and they will come back when we talk about the resurgence of machine learning. The first is a huge mouthful, but it's essentially Hebb's postulate. So, Donald Hebb, in case you haven't heard his name, you should: the term Hebbian learning comes from his name. What he observed, and this is a very loose paraphrasing, but the world uses it a lot, is that cells that fire together wire together. That's the loose paraphrasing of the sentence he actually wrote. What he observed was that your brain is actually trying to figure out causality: neurons are looking for other neurons that reflect their view of the world, trying to find correlations and associations across those things. This Hebbian learning has been the cornerstone of a lot of neuroscience research since then, and it still drives a lot of the machine learning work that's happening now. Another observation, which I think is also pretty critical, was Hubel and Wiesel in the 60s making recordings from cat brains. I think it was cat brains; I think they recorded from a bunch of animals, but let's just talk about cat brains. They were recording from the primary visual cortex of these animals, and what they observed when they presented visual stimuli to anesthetized cats was that the cells responded in two ways. There was a class of cells that simply integrated and responded to what was underlying them.
So if you think of a neuron in the brain and the whole of visual space, these cells responded to a tiny portion of visual space: they summed up the inputs that impinged on the retina, and the result was cells that were responsive to edges, bars of light, spots, and so on. These were the simple cells. Okay, that's fine; in machine learning parlance, that's a feature descriptor, a feature you've described. Anyway, they observed a second class of cells, called complex cells. What the complex cells seemed to do was integrate a lot of simple cells. Now, this is important. If you have a whole bunch of simple cells that all respond to vertical lines, and they're near each other in the brain but slightly displaced in space, correspondingly displaced in which region of visual space they respond to, and you combine their inputs, it means that whether the bar was here, or here, or here, this complex cell would respond to it. Now we go back to the themes I said were important: this is essentially spatial invariance. These cells are now responding to bars of light regardless of where exactly they happen. Okay, we'll come back to this, but fine, I'll give away some of the story already: this is the harbinger of pooling, which a lot of you are familiar with, so we'll get to that in a bit. Okay, so meanwhile some other people were trying to put together larger networks, and I'm just pointing out two that stand out to me. One was called the neocognitron. The neocognitron, regardless of the name and the intentions behind that name, had a layered architecture that was based on what Hubel and Wiesel observed.
Okay, so what they had was this: if you have input, that's visual space. The first stage, contrast extraction, is probably the equivalent of a retina; let's skip past that. You have a layer of simple cells that are doing things like edge extraction, then you have a complex layer which gives you slightly invariant representations of those same features, and then you have another pair of those, and so on and so forth, cascading these layers together. Okay, and you build a classifier on the top layer and see whether it can recognize things. So this is pretty cool, and for those of you who recognize this architecture, this is a convolutional net. We'll come to that again at the end; we're going to bind this all together, but I will keep giving away the clues if they're not obvious. There was lots of theory and lots of experiments that laid the foundation for modern machine learning. I'm pointing those out because the lesson to be taken is that neuroscience is continuing to do this. There are, I don't know, maybe 50 to 100,000 neuroscientists out there in the world doing fundamental research that contributes things like this, and these are the things being exploited and reused in the world of machine learning, with very good reason: they're good and they work. Okay, so another network which I really like is called the Hopfield network.
Well, Hopfield wasn't the first person to actually build this kind of network, but he became strongly associated with them. John Hopfield built networks that had a whole bunch of recurrent connections, so this is one of the first recurrent neural networks out there, and he found that it had a lot of potential applications in associative learning. These networks, depending on how they were configured, were capable of associating patterns that came in and settling into stable states that represented the memories stored in them, quite a handful of them. We can go into greater detail about these things offline if you want any level of detail and you're feeling lost; talk to me later. And if some of you are hitting the workshop tomorrow, I'll be running the vision workshop, so you can save your questions for that as well, because I'll be available for a whole four hours. Right. Okay, so you have conv nets and you have RNNs, done in the 80s: examples of them that kind of worked and demonstrated pretty cool features. And then machine learning came back; the world celebrated again with the invention of the backpropagation algorithm. I mentioned that those perceptron layers were all handcrafted, right? So obviously not scalable. So when somebody came up with a rule that would allow you to actually train that entire set of weights to map out all those input-output functions, people got excited again. You now had a mathematical basis for training multi-layer perceptrons that could learn extremely complex input-output functions. Sounds great. Everybody started investing in artificial neural networks, you had a huge resurgence, and it fell flat again, unfortunately, for multiple reasons, but I would like to attribute one major reason: you still did not have enough compute power to run large enough networks to make a difference. You're still running toy problems.
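The kind of multi-layer training that backpropagation enabled can be sketched in a few lines. This is a toy of my own, assuming nothing beyond NumPy: a tiny two-layer network trained by backprop on XOR, a function that no single perceptron can represent but that one hidden layer handles.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# XOR is not linearly separable, so a lone perceptron fails on it,
# but a hidden layer trained with backprop succeeds.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output
lr = 1.0

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                     # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)          # backward pass: chain rule at the output
    d_h = (d_out @ W2.T) * h * (1 - h)           # error pushed back one layer
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round().ravel())                       # should recover XOR: 0, 1, 1, 0
```

With a handful of hidden units this reliably converges; with too few it can get stuck, which hints at why training stayed finicky even after backprop arrived.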
Sure, they made a few things that actually worked, maybe in handwriting recognition and so on and so forth, but you couldn't get past that. There was a huge bottleneck: you didn't have large enough machines, not enough RAM, not enough compute power. You still couldn't build large enough networks to make this useful, and there was not enough data; we'll get to that separately. So interest flared up again but subsided. Now, the difference between the first AI winter and the second one was that things didn't just go quietly, because there were lots of intelligent people who recognized this field for its power and continued doing very credible research. Again, I can't do justice to all the different gains that happened in this period, but despite the lack of compute power, a lot of very good people reassembled under the term machine learning and did a lot of good work. Okay, so I'll point out a few things. LeNet is of importance because LeNet was done for handwriting recognition, and you have Yann LeCun to thank for the LeNet, I think. For people who don't recognize his name, he runs Facebook's AI today. Support vector machines: now, these were very interesting. Support vector machines came at a time that was perfect; neural networks couldn't really do too much. Support vector machines had been around for a while: Vapnik was one of the people who invented them, back in the 60s or 70s. But what happened in the 90s was that they came up with something called a kernel trick.
So, when I pointed out that you could draw these simple hyperplanes to separate out the data, what would happen when your data was a lot more complex, say a donut-shaped thing? It's a very common example; just Google "kernel trick" and you will find these data sets that are very hard to classify by putting a linear separator between them. What made support vector machines really powerful was that, in addition to being very good at linear classification, if you applied these kernel tricks you could transform the data into a different space, a different manifold, that would then allow you to put in linear classifiers and separate out your data. So that was really cool, and it got a lot of popularity. In fact, people still use a lot of those kinds of things. I'll skip a few things: a lot of self-organizing map related work came out of neuroscience; again, let's skip that. Everybody has probably heard of SIFT. SIFT was again based on observations about the human visual system: the nature of the receptive fields in the visual cortex got translated into Gabor pyramids and all of that kind of stuff, and one of the consequences is SIFT. It's been the mainstay of a lot of object recognition for a while. I have done complete injustice to my fellow NLP-related guys, but sorry, I'm a vision person, so I have an extreme bias towards thinking about vision-related problems. But those guys have been chugging along. They were not necessarily affected by the lack of data or compute power, because they could make do with what was there, and a lot of good work, especially in recurrent neural networks, happened during this period, culminating in LSTMs, which I think were invented around '97. I forget the names of the guys again.
Sorry, my bias; I'm not an NLP guy. But LSTMs are still state-of-the-art in a lot of NLP research. So that's fine: the interlude basically had steady progress, with people making very good models but doing it with the limited resources that were there. And then, of course, you have your sponsors over here, who had GPUs and released CUDA. Okay, so this is really cool: finally you had compute power. This is exciting. Graphics cards, people observed, were extremely good at doing all of these operations in parallel, and neural networks seemed to be an embarrassingly parallel problem that you could shove onto a graphics card. Even the first versions of them were doing wonders. I have the good fortune of having somebody named Costa Colbert as one of the founding members of Mad Street Den. So, Costa, anecdotally, and this is interesting, was part of a company named Evolved Machines in Palo Alto, and Evolved Machines was something like the second company NVIDIA ever listed as using CUDA, using it to build a neural computation engine. So when these GPUs came out, starting in the mid-2000s, I can't put an exact date to it, people saw the value and started adopting them. One of the people who adopted it was a guy named Geoffrey Hinton, and if you don't recognize the name, he kind of heads Google's AI now. Geoffrey Hinton took all that compute power, took those neural networks, and said: yeah, I can train them, but they're not converging very well. So he contributed something called a deep belief network. I'm sure he's not fond of the name deep belief networks, but let me just explain what he did. What he observed was that, with a limited set of data, despite all this computational power, you really couldn't train your neural networks to converge very well; it was very hard to do so.
So instead, he pre-trained those networks using these things called restricted Boltzmann machines; again, a lot more detail tomorrow in the workshop, so I'll keep it short. The idea was that he built generative networks, networks that were very good at representing their own inputs, and he trained them in a greedy fashion. That set up these networks so that they could converge very well when subsequently trained with backpropagation and all the compute resources that NVIDIA GPUs provided. Okay, so that was great, and he had a lot of success with it. In particular, these RBMs turned out to be very good at building autoencoders: you could train an RBM network, unroll it, and since it's generative, you could use it to set up an autoencoder and train that with backprop. A lot of the applications mostly hinged on these kinds of things. Okay, so as I mentioned, he found this very effective because he now had the GPUs and the algorithms that could be thrown at these classification problems. But what he and a bunch of other people found was that if you had a ton of data as well, you actually didn't even need to pre-train the network. And this is essentially the world of Google today: you have a ton of data out there. So you have the data, you now have the algorithms, and you have the compute engine. What can you actually build with it? Now, if you take these networks, the traditional neural networks that you're working with, these tend to be fully connected. So, yeah, over here: the idea is that every cell is connected to every cell in the next layer, and you try to compute all of these weight matrices that will give you the computation that you need. The problem is you'll quickly run out of memory, and as good as NVIDIA GPUs are, they do have limited memory.
So you can't train absurdly large networks using this kind of technology. So, flash back to the neocognitron, where you could use convolutions instead. There's an observation that's been made when you study a lot of these things, which is that when you are processing information, particularly visual information (I'm sure the NLP guys can contribute to that discussion on the other front), there is a very high spatial correlation amongst inputs that are right next to each other. If that weren't so, I wouldn't be able to make sense of the scene in front of me right now. Each one of you is spatially localized; it's not as if parts of you are split across my entire visual scene and I have to combine that information. So if that is true, why do I need a weight matrix that combines all of these inputs? I can keep a very tiny weight matrix that's looking at a tiny subset of visual space. Okay, so assume that's true. If you can do that, then what you can do is train a whole bunch of these tiny little receptive fields, and the observation that comes out of that, from both neuroscience and from trying to train a lot of these, is that you will find a lot of repeated patterns. If I want to identify people on my left or on my right, the kind of receptive fields, the kind of feature detectors I need, are going to be identical; people don't change just because they're on the left of my visual space rather than the right. I'll still need edge detectors, I'll still need spot detectors, and so on and so forth. So the filter whose memory requirement you have just shrunk is actually the same one you need across the whole board. Voila, you have a convolution: a filter bank that essentially captures one type of feature and tries to see whether an entire scene has that feature by convolving it. So the benefit from this: very low memory
footprint, because the only thing you need to store are the parameters of that convolutional filter. So now you can train extremely large networks with lots of feature descriptors, churn them through your GPUs, and train them on a ton of data, which Google and others provide, and you now have state-of-the-art neural networks. And the example over here, in case you don't recognize the name: his full name is Alex Krizhevsky, and this is essentially your AlexNet, for the people who have heard of that. That's the origin of your AlexNet; I'm assuming it's named after him. So the AlexNet came out of Hinton's lab in 2012 and took on the challenges that the world was throwing at it. You had all these image recognition challenges, and it blew them out of the water. And that, including AlexNet winning the competition in that particular year, got the whole world interested in deep convolutional nets. It hasn't looked back, and the whole world essentially uses a ton of these to do a lot of different things. The interesting part for most people, even if you're not into deep learning per se, is the huge plethora of applications that can then be addressed by these kinds of classifiers, which you can now train reasonably reliably and extremely fast. And when I say extremely fast: we've gone from years to months to weeks, on a ton of data, so that you don't need to handcraft those features or do anything; it can probably figure out the representation by itself when you have enough data. And it does pretty well. How well does it do? It beats humans at specific tasks.
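The weight-sharing argument above can be sketched directly. This is an illustrative toy of mine, not production code: one small filter, reused at every image location, replaces a full everything-to-everything weight matrix, and a complex-cell-style max pool on top gives a little of the spatial invariance Hubel and Wiesel's observations hinted at.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide one small shared filter over every position of the image ('valid' mode)."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Complex-cell-style pooling: keep the strongest response in each
    neighborhood, so small shifts of the stimulus barely change the answer."""
    h, w = x.shape
    x = x[:h - h % size, :w - w % size]
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A 3x3 vertical-edge detector: 9 shared weights instead of a dense
# (36 inputs x 16 outputs) matrix for this 6x6 image.
edge = np.array([[1, 0, -1],
                 [1, 0, -1],
                 [1, 0, -1]])

image = np.zeros((6, 6))
image[:, 3:] = 1.0                      # right half bright: one vertical edge
response = convolve2d(image, edge)      # nonzero only near the edge columns
pooled = max_pool(np.abs(response))     # edge still detected after pooling
print(response)
print(pooled)
```

The filter responds at the edge no matter which rows it falls on, and the pooled map keeps that detection under small shifts: spatial invariance from shared weights plus pooling.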
So the ImageNet challenge is essentially about building a thousand-class classifier. If you put a human against one of these networks, not necessarily the AlexNet, but the subsequent winners in following years, they'd probably beat most of us in this auditorium; you need to be a very special human to beat one of these things. However, they still don't drive your cars by themselves: it's not just about object recognition, it's not just about language processing, you need to assemble all of these subsystems to actually make something useful. I'll leave you with one last thought. The progress in the field is not just about exploiting and building deeper nets; it's about how you assemble these subsystems, as I mentioned, into more interesting things. An example is the application of image captioning, where this has been done fairly successfully. If you're trying to describe this image, it's not a classification problem by itself; you also need a grammar that represents what's actually in it. So there have been some pretty good examples of using the power of deep CNNs to do the image classification related parts and then assembling the words, the labels that come out of that, and passing them through very powerful recurrent neural networks that the NLP guys have been creating. And now you have some pretty credible captioning networks that can do this. So this is just the beginning. This is just a start; the whole world is ahead of anybody who actually wants to get into deep learning, and hopefully to go beyond deep learning: to start extracting more from the world of neuroscience. As I mentioned, everything you've seen here has come out of studying the brain and trying to model it. So I would hope at least a portion of you would try to do that, as we try to do at Mad Street Den, and continue the development of machine learning. So I'll stop over here and open it up.
If you have any questions.

Actually, this is Siva. Very sorry. So, my understanding from this talk is that it's a lot of image classification and predicting actions from an image, that perspective.

Could you speak a bit louder? I can't hear.

A lot of image classification, action and pattern recognition, the things which are already there; that's one part. My question is: how different is deep learning from machine learning? Like, is there an application of deep learning in the banking domain? Just trying to see, or is it only limited to images and image processing? Deep learning versus machine learning in terms of applications, I mean.

Sure. So I'm not going to answer that completely; there's a whole bunch of very good speakers lined up who are going to be talking about applications in different domains. Again, I repeat, this overview is biased by the fact that I work on vision primarily, but there are lots of other domains in which deep learning can be used to sift through the patterns that are applicable there. Is that essentially what you were asking about?

Yeah, yeah. I think the other sessions will probably also give a little more clarity. Another question: can I assume deep learning is the next level of the image processing or computer vision topic? Like, I've seen a bit of SIFT and edge detection and all this stuff, but not gone beyond that in terms of really trying things out. So is it something that, if you wanted to do advanced computer vision stuff...

If you wanted to do advanced computer vision, yes, that topic is closely correlated with deep learning. Rather than putting it that way, I'll rephrase it and say that deep learning techniques are currently the reigning champions on computer vision related tasks, thanks to their power.
They have beaten the previous reigning champions, SIFT and so on and so forth, by how they've been applied. That's not to say you can't build hybrid networks, taking what you've learned using SIFT and finding a hybrid that takes it beyond where it is right now. But yes, if you are serious about computer vision research, you do have to go through deep learning currently.

Thank you very much. Thanks.

Sorry, I can't hear; could you speak up and keep it closer?

Yeah, my name is Rajesh. I'm from the dealer team; we're a Chennai-based consulting firm. A two-part question, actually. The first question is: do you envision moving from a special-purpose system, like the ones we talked about, you know, image recognition and image processing, to a general-purpose one, and what does that paradigm shift require in terms of...?

Right. What would you require to actually make it more generalizable? There's no easy answer to that, and part of it would have to be the creation of new networks and network architectures that are capable of generalizing better than these are. But that's kind of a cop-out, right? Taking the existing networks themselves, what needs to be done is to figure out how you could assemble a system that builds on previous system capabilities. Okay, so I'm being a little vague over here. You have the backbone of a network that, at its fundamental level, processes the raw data for you. All of these networks, all the winners of all these image competitions and so on and so forth, have done that work for you. Now what you can do is use that as a template, piggyback on it, and build systems that take part of it, or assemble multiple versions of it, in order to make it more generalizable. The problem is that's not intuitively satisfying; you would rather have the generalization be inherent in the architecture, and that brings me back to my first answer.
that's still an avenue of research.

Sorry, could you repeat that and speak up a little? I can't hear you.

So some time back you said that when you improve a system, it gets better at what it is supposed to be doing; you improve its efficacy or its efficiency at one thing. But when you spread that effort across a range of systems, they would have to work together.

Yes, sure: how networks would communicate with each other. If you take the last example I showed you, where you have essentially slapped an RNN on top of a CNN, that is not networks working with each other. That is taking the output of a CNN and feeding it into another network that is good at dealing with labels. Now, if your captioning system were able to feed a little context back and tweak how the visual network was performing, so that the labels that came out were more relevant, now you're talking; there might be a lot more power in that. So it is not just an assembly of disparate parts where you then try to figure out how it works; you have to figure out how the parts actually merge together. There are people trying to do that, but it is not something that has borne fruit yet.

Just wondering: since computer vision and machine learning are getting so closely tied together, is this the purview of the Googles and Facebooks alone, who can afford the GPUs, the massive data and training sets, and the investment that goes into training? Or is there a larger group of companies or people who could find niches and do things of value?

Absolutely. This is not just the purview of Google or Facebook.
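The one-way pipeline described above, where the output of a CNN is fed into an RNN that deals with labels, can be sketched in a few lines. This is a toy illustration in plain NumPy: the "CNN" is just a pooling stand-in, the decoder weights are random and untrained, and every dimension and vocabulary size is a made-up placeholder, not anything from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: all placeholders for illustration.
FEAT_DIM, HIDDEN, VOCAB, MAX_LEN = 64, 32, 10, 5

# Stand-in for a CNN: global-average-pool an "image" into a feature vector.
def cnn_features(image):
    return image.mean(axis=(0, 1))  # (H, W, FEAT_DIM) -> (FEAT_DIM,)

# Randomly initialised decoder weights (untrained; illustration only).
W_init = rng.normal(0, 0.1, (FEAT_DIM, HIDDEN))  # image feature -> h0
W_hh   = rng.normal(0, 0.1, (HIDDEN, HIDDEN))    # recurrent weights
W_emb  = rng.normal(0, 0.1, (VOCAB, HIDDEN))     # token embeddings
W_out  = rng.normal(0, 0.1, (HIDDEN, VOCAB))     # hidden -> token logits

def caption(image):
    """One-way pipeline: the CNN output seeds the RNN; nothing feeds back."""
    h = np.tanh(cnn_features(image) @ W_init)  # CNN output becomes h0
    token, tokens = 0, []                      # token 0 plays the <start> role
    for _ in range(MAX_LEN):
        h = np.tanh(W_emb[token] + h @ W_hh)   # simple Elman-style RNN step
        token = int(np.argmax(h @ W_out))      # greedy decoding
        tokens.append(token)
    return tokens

image = rng.normal(size=(8, 8, FEAT_DIM))
print(caption(image))  # a list of MAX_LEN token ids
```

Note that information only flows left to right here, which is exactly the limitation raised in the answer: the decoder never sends context back to influence the visual features.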
Sure, they have a ton of data, and sure, they have a lot of money to spend on GPUs. But GPUs are not that expensive; the good folks at Nvidia have made them fairly cheap. You can build a GPU cluster with gaming cards; you don't necessarily have to use Teslas unless you're deploying at scale. And it's not just about the cost of the GPU; it's about the value you get out of the use case you have. Very few use cases of immense value are being handled right now, so while the cost of the hardware goes down, there remains a long tail of use cases that is not being tackled.

Correct, and that is the world of the startup. Absolutely, that is what startups need to be tackling. You don't go head-on with Google, but Google can't cover the world, and neither can Facebook. There is a plethora of use cases out there that, quite frankly, Google will not even bother to solve because they are too tiny for Google's attention, but solving one of them could still produce a hundred-million-dollar company. The resources are there in terms of compute power: a GPU instance is not that expensive, and you have access to plenty of them even if you don't have a physical cluster of your own, since all of the cloud providers now offer GPU instances. And, with all due respect to Nvidia, distributed and parallel computing is going to get cheaper, lower-power, and more efficient by going past GPUs to other technologies. We can take it offline, but there are lots of things coming up in the realm of neuromorphic engineering and the like that will bring the cost of all of this down much further. So you have compute power.
You don't necessarily have the same amount of data that Google does, but that is where you have to be smart about it. Even if you do need to go and collect data, it is not an impossible or Herculean task: there are proxies, and there is crowdsourcing if you want things labeled. And maybe it is our job to figure out how to make do with less; that constraint might actually drive innovation. Build hybrid networks that bootstrap on the giants, but build them in a way that uses very little data. Those are some of the things that we do at our company, as do plenty of others. Thank you.
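The "bootstrap on the giants, but use very little data" idea above is commonly realized as transfer learning: freeze a pretrained backbone and train only a small head on a handful of labeled examples. Everything below is a hypothetical stand-in in plain NumPy; the "backbone" is a fixed random projection rather than a real pretrained CNN, and the tiny dataset is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a frozen pretrained backbone (in practice: a trained CNN).
W_frozen = rng.normal(size=(20, 8))
def backbone(x):
    return np.maximum(x @ W_frozen, 0.0)  # frozen features, never updated

# Tiny labeled dataset: two synthetic classes, 30 examples in total.
X = np.concatenate([rng.normal(+1.0, 1.0, (15, 20)),
                    rng.normal(-1.0, 1.0, (15, 20))])
y = np.array([1] * 15 + [0] * 15)

# Train only a small logistic-regression head on the frozen features.
F = backbone(X)
w, b = np.zeros(F.shape[1]), 0.0
for _ in range(500):                                 # plain gradient descent
    z = np.clip(F @ w + b, -30, 30)                  # clip to avoid overflow
    p = 1.0 / (1.0 + np.exp(-z))                     # sigmoid predictions
    w -= 0.1 * F.T @ (p - y) / len(y)
    b -= 0.1 * (p - y).mean()

p = 1.0 / (1.0 + np.exp(-np.clip(F @ w + b, -30, 30)))
acc = ((p > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The point of the sketch is the division of labor: all the expensive representation learning is locked inside `backbone`, so the part you actually train has only nine parameters and needs only a few dozen examples.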