Hi all, I'm Anand Chandrasekaran. I'm the CEO of an artificial intelligence and computer vision company called Mad Street Den. We're based out of Madras and the Bay Area, but I'm not here to talk about the company today. I'm here to talk about something I used to do when I was a postdoc at Stanford. What I'm hoping to give you is a very brief introduction to a field called neuromorphic engineering, and hopefully I'll convince you that it's one of the things you should be paying attention to, and that it's of relevance to the world.

We heard Shailesh bring up some very interesting points in the previous session, and some of those have important implications for the architectures through which we deliver intelligence to the world. Deep learning, as he pointed out, has some inherent flaws, and I tend to agree. You don't need to be training on a million images to come up with useful, generalized intelligence. Neuromorphic engineering isn't directly concerned with that, except that in its underpinnings it draws inspiration from neuroscience, where we know we don't learn that way. We know that as humans we don't learn by training on a million images. So hopefully this talk is a bridge to answering some of the questions he raised, and it has extra relevance because of that.

About the title of the talk: that's actually fairly straightforward. Moore's law, as most of you are aware, expresses an observation made a long time ago that, and I'm paraphrasing here, compute capability will keep doubling every 18 months or so. Interestingly, the term "Moore's law" was coined by a person named Carver Mead, who went on to become the father of neuromorphic engineering; his lab at Caltech pretty much spawned every professor out there who does neuromorphic engineering today. We are fast approaching certain fundamental limits in physics, so Moore's law is getting harder and harder to maintain. It's more of a wish, something the world is collectively trying to keep true because we see the benefits of doubling our compute capability, but we've seen the staggering costs it entails. Every time we try to decrease the process size, you're talking about billions and billions of dollars invested, and only companies like Intel and IBM can actually do that. Well, there are alternatives. If you think about the goal, the hope, behind Moore's law, which is doing more with every passing year, then what we need to be doing is looking at alternate architectures: thinking in parallel rather than in series, not worrying about von Neumann architectures, and working on neuromorphic architectures instead, for example.

I'm going to cover some very basic material to make sure we're all speaking the same jargon; most of you are familiar with some of this. Since the underlying principles here come from neuroscience, I'm giving a brief primer. We can very simply think of a neuron in your brain as a fundamental unit of computation. You can argue with me about whether that's philosophically true, but for convenience's sake, let's consider the neuron to be the fundamental unit of computation that happens in the brain.
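Since the rest of the talk leans on this abstraction, here is a minimal sketch of a neuron as a compute unit: a leaky integrate-and-fire model in Python. The parameter values are illustrative, not drawn from any particular chip or paper.

```python
import numpy as np

def lif_neuron(input_current, dt=1e-4, tau=0.02,
               v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: integrate input, spike on threshold.

    input_current: 1-D array of input drive over time (arbitrary units).
    Returns the membrane voltage trace and the indices of spike times.
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # The membrane leaks toward rest while integrating its input.
        v += dt / tau * (v_rest - v) + dt * i_in
        if v >= v_thresh:       # the non-linearity: an all-or-none output
            spikes.append(t)
            v = v_reset         # reset after the spike
        trace.append(v)
    return np.array(trace), spikes

# A constant drive makes the neuron fire at a steady rate.
trace, spikes = lif_neuron(np.full(2000, 80.0))
print(f"{len(spikes)} spikes in 0.2 s of simulated time")
```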
We have our analogy in machine learning, and most people in this audience are familiar with it. Whether you call it a neuron, a convolutional filter, or a feature extractor, it shares the same basic capabilities. You have a cell body with these things called dendrites; the dendrites essentially serve as a filtering mechanism through which you collect input from other computing elements. Those inputs are then passed through some kind of non-linearity, essentially the same as over there, which allows the computing element to decide whether to pass the information on or not. That information is then transmitted through the brain along extremely long cables called axons, which take the signal, which is essentially digital, and pass it along to other compute elements, and the process repeats. This maps almost perfectly onto the basic compute elements we have in machine learning today.

At a separate level, what do you do with these connections, and how are they actually connected? There's an analogy there as well. This picture here is an image in which a very tiny subsection of a cross-section of a brain has been labeled: these little green things are individual neurons, and you can see a complex and elaborate pattern in which their dendrites have a very particular structure. These are the things that filter the information; the non-linearity is computed somewhere close to the cell body, and then you get the output. And here is a familiar deep convolutional network diagram. You have very similar analogies: you can think of the width of your convolutional filter as the equivalent of your dendritic or axonal arbor. You even have cells in the brain that are fully connected. So these things match up pretty well, but it's at this point that things start deviating, because how this information is processed, the architecture that makes use of it, diverges very rapidly, and existing machine learning techniques are pretty much just scratching the surface. In fact, most deep convolutional networks, for example, are purely feed-forward, and Neuroscience 101 tells us there's ten times more feedback in the brain. So there are lots of deviations and the analogies break down, but I'm not here to discuss that. What I would like to point out is that whatever we learn from neuroscience will be translatable: we can take what we do over there and apply it over here, so it will be relevant regardless of the philosophical underpinnings.
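To make the feed-forward versus feedback point concrete, here is a toy contrast in Python (the sizes and weights are made up): a standard layer computes its output from its input alone, while a brain-like element would also fold in activity returning from higher layers.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)             # input activity
W_ff = rng.standard_normal((32, 64))    # feed-forward weights
W_fb = rng.standard_normal((32, 16))    # feedback weights from a higher layer

# Pure feed-forward, as in most deep convolutional networks:
h = np.tanh(W_ff @ x)

# With feedback: downstream activity re-enters the computation, which is
# closer to the connectivity neuroscience reports in cortex.
top_down = rng.standard_normal(16)      # activity from a "higher" layer
h_fb = np.tanh(W_ff @ x + W_fb @ top_down)
```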
Going into the crux of the matter now: when we talk about the scale of these systems, the human brain has something like 100 billion neurons, give or take a few billion, and these are connected by around a hundred trillion connections. Whether all of these are useful is a completely different discussion, but that's the order of magnitude. The scale of state-of-the-art machine learning is actually not that easy to pin down, because it depends on the architecture being used, and you have extremes. At one extreme you have somebody like Henry Markram in Europe, who is building extremely elaborate models of neurons: you have only 10,000 neurons, but you still need supercomputers to compute them, because they're modeled in elaborate detail, down to the ion-channel level. At the other extreme you have Google and Baidu and Andrew Ng and everybody competing to build the largest deep learning networks every other year. Regardless of which architecture you're talking about, you're off by orders of magnitude in scale; you're still talking about extremely small networks. And while Shailesh made the very important point that the size of the network is not necessarily what matters, a lot does depend on how big a system you can actually build: the number of things you can do will obviously increase with the size of the network. A simple analogy from nature: we so-called higher mammals essentially have a much larger brain, a much larger cortical region, that can support many more abstract associations, which is what we would associate with higher forms of intelligence.

So regardless of what architecture you're actually building, what we have today is orders of magnitude off from what we have in biology. But this doesn't really convey the picture as starkly as when you think about it in terms of power. Every one of your brains runs on 20 watts, give or take. To simulate anything, even at the smaller scales we just discussed, you're talking about the megawatt range. This is a Blue Gene rack, probably an older Blue Gene that did a petaflop of computation; you could probably use it to build something a hundredth the size of the brain in terms of computational units. We're still not talking about making them useful, just about what you can fit in there running at relatively the same time scales, and you're talking about six orders of magnitude difference in power.
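To see where "six orders of magnitude" comes from, here is the back-of-envelope arithmetic using the round figures from the talk; the megawatt number and the one-hundredth fraction are representative, not measurements.

```python
import math

brain_watts = 20.0        # the brain's entire power budget
machine_watts = 1e6       # a megawatt-class supercomputer (round number)
fraction_of_brain = 1e-2  # it fits roughly a hundredth of the brain's units

# Power for a whole brain's worth of units, scaling the simulation linearly:
watts_per_brain_equiv = machine_watts / fraction_of_brain   # 1e8 W
gap = watts_per_brain_equiv / brain_watts                   # 5e6
print(f"~{math.log10(gap):.1f} orders of magnitude")        # ~6.7
```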
So if you ever want to build an artificial intelligence, to build a brain and put it in a robot chassis, you can't connect it to a hydroelectric dam to power it; you'd essentially have one robot per Earth or something like that. This is completely unscalable, and so we need to learn something from the brain, something about the architecture and about how to build these systems efficiently, to ever bridge the gap in scale. I'm not yet talking about capability or function, just about scaling in power.

Well, how do we do that? That is the field of neuromorphic engineering. Starting in the 80s, Carver Mead has essentially been pioneering a body of work in which, instead of simulating the brain, you emulate the brain. General-purpose computing, whether von Neumann architectures or the GPUs of today, can only simulate the brain. Even with great advances in GPU technology, your clusters maybe buy you one order of magnitude of savings, and we were talking about a six-order-of-magnitude difference in power, so you're not going to get there that way. Neuromorphic engineering, on the other hand, is about building custom silicon: taking the architecture of the brain and morphing it onto silicon, hence "neuromorphic."

I'm showing you two examples of such systems that have come out in the last decade. The first one, Neurogrid, was built at Stanford in the lab of a guy named Kwabena Boahen. I was very lucky to be part of the team that built it; a minor part, but a part. What you're seeing here is a 16-chip system; you can see its size, and we're not talking about kilowatts or megawatts. This 16-chip system emulates a million neurons with a billion connections. Yes, I know that doesn't match the billions and trillions we were just talking about, but it's a stepping stone. Each of these chips has transistor logic that lets you emulate how neurons in the brain function. I'm not going to go into great detail, but as a teaser: we can morph the analog properties of neurons in the brain by using subthreshold analog logic. This is the part of transistor physics, the part of the I-V curve, that people usually throw away as leakage current: picoamp-range currents which are usually not even well characterized, so there's still a bit of an art form to it. But you can harness those extremely tiny currents, which obey exponential math, to mimic the biophysical properties a neuron has. What this ends up giving you is extremely low-power transistor circuits that can emulate the nonlinearities captured by the neurons in your brain.
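For the curious, the "exponential math" being harnessed is the standard subthreshold MOS relation; in its textbook form (ignoring drain-voltage effects),

$$ I_D \;\approx\; I_0 \, e^{\,V_{GS}/(n\,U_T)}, \qquad U_T = \frac{kT}{q} \approx 26\ \mathrm{mV}\ \text{at room temperature}, $$

where $I_0$ is a device-dependent leakage scale and $n$ is the subthreshold slope factor. Ion-channel conductances in real membranes follow the same exponential-in-voltage (Boltzmann) form, which is why these tiny currents can mimic neuronal biophysics so directly.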
That's one part of it. Then, to communicate this information, we use asynchronous digital logic. As opposed to the clocks that drive all your CPUs today, there are no clocks on this chip. Think about what would happen to a chip running circuitry that uses picoamp currents if you had a massive chip-wide clock swinging up and down: it would completely swamp those signals. So the communication infrastructure uses asynchronous digital logic. That's the teaser into the kind of technology that goes into building such a chip. This Neurogrid chip, back in 2009 or 2010, was at that time, I believe, the largest neuromorphic system ever built.

Several years later we got TrueNorth, which came out of IBM Almaden; they have a group there called the cognitive computing group. My fellow postdocs who built Neurogrid were part of that team, and they built TrueNorth, which is essentially sixteen times the size: each TrueNorth chip has a million neurons. There are some differences in architecture and in the technology used; in fact, they don't even use subthreshold analog, the entire chip is digital. But it's still a path toward building these kinds of networks at scale. IBM is investing in it, Qualcomm is investing in it, and several other big companies are investing in it, because they all see the value of these new architectures in pushing the envelope on how much compute can actually be done. There's a whole bunch of other such neuromorphic systems out there; obviously, in an introductory talk we can't go into all the flavors and variants. But this is an active area of research in most of the rest of the world, and we need to ramp it up in India as well. Part of my agenda for this talk is to drive a little extra interest in doing some of this ourselves, because this is part of the future that's coming.

So I've given you a brief introduction to why we need this (power is one answer) and what's out there (Neurogrid, TrueNorth, SpiNNaker, a bunch of systems). Now I want to give you a little teaser of how such architectures are built, so I'm going to walk you through an exercise with some building blocks. There are multiple ways to build neuromorphic systems, but I'll walk you through building one such system and use it as a template for what it entails to build these kinds of systems, and what is different about how they process information. As a reminder, the structure of a neuron: you have your filtering structures, the dendrites, which collect information; it is integrated in the cell body, the soma; and the result is transmitted to other compute units. This picture, again out of Neurogrid, shows what ends up happening: the VLSI design gives you the individual components that mimic each of these parts, you put them together into a metapixel, and you tile that across huge chips, which can then hold a huge number of neurons.

Now I'm going to add one extra element. This picture shows nanowires; this comes out of HP Labs, where they built something they dubbed the memristor. The memristor is essentially a device that mimics the synaptic connections you have when one neuron connects with another through these axons: at the very tip of the connection there is a release of chemical neurotransmitters that transduces the electrical signal.
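As a rough behavioral caricature (not HP's published device model), you can think of a memristor as a conductance whose state drifts with the charge pushed through it, which is exactly what a tunable synaptic weight needs:

```python
class Memristor:
    """Toy linear-drift model with a normalized conductance w in [0, 1]."""

    def __init__(self, w=0.5, rate=0.05):
        self.w, self.rate = w, rate

    def apply_pulse(self, voltage):
        current = self.w * voltage  # Ohm's law at this instant
        # The charge pushed through nudges the internal state, so the
        # junction acts as a non-volatile, electrically tunable weight.
        self.w = min(1.0, max(0.0, self.w + self.rate * voltage))
        return current

m = Memristor()
for _ in range(5):
    m.apply_pulse(+1.0)   # positive pulses potentiate: w climbs to 0.75
for _ in range(5):
    m.apply_pulse(-1.0)   # negative pulses depress: w falls back to 0.5
```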
In a neuromorphic architecture you could do that with CMOS, you could just build circuits that do this, but those circuits are going to be very expensive and very big if you want to have a lot of them. Remember that the number of connections, meaning the number of synapses, is a thousand-fold the number of actual compute units, so we're talking about a lot of connections that you're trying to fit into the CMOS architecture. So memristors are potentially the way of the future. What these nanowire junctions allow is for synapse-like properties to be expressed here: they are capable of learning, meaning weight changes just like you have in backprop in machine learning, and those weight changes can be captured by altering the electrical signals on the source and target sides. So these devices can learn in a way that's analogous to the brain, and analogous to machine-learning weights, and you can capture a lot of weights by tiling arrays of these nanowire devices on top of the chip itself.

So, one such architecture in brief: you have your computational units down here; they are the ones that do the integration, the nonlinearity, and all that. You have your dendrites, the filtering units; they transduce the incoming signals into signals that can then be integrated. And on top of that you have a fabric of connectivity: all the connections from the millions or billions of cells you have must somehow reach this dendrite to allow information transmission.

So how is this information transmitted? What I've explained so far is the integration part, but there's also the transmission part, and I'd like to briefly touch on that. For several decades now, the entire neuromorphic community has been using something called address-event representation. It's not unique to neuromorphic engineering, but it's a very convenient way of transmitting data. This pixel here, this single neuron, has to transmit information; that pixel is tiled in an array of neurons, just like any layer of a neural network: you have a whole bunch of neurons and you want this layer to communicate with another layer. The way you do it: when a neuron fires, that is, when it has integrated its input and produced an output, the address of the cell that fired is translated into a packet that you can transmit to other such chips. So neuromorphic chips or systems communicate by sending information only when something happens. The event is the firing of a neuron, and the address of the neuron that fired is what gets transmitted: address-event representation. On the receiving chip, the event can be delivered at the same location or, with some logic, into a larger arbor. Remember what I said: this is the equivalent of your convolutional filter width, the region of interest for a cell at the target location. The effect of one neuron firing is defined by its arbor, so you can just transmit the event somewhere into the vicinity of that arbor and let the substrate carry the information to the rest of the machinery: the dendrites, the soma, and so forth. So this is the basic building block; this is the higher-level picture of how chips communicate.
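Address-event representation is easy to sketch: nothing is sent while a neuron is silent, and when it fires, the only thing transmitted is its address. Here is a minimal Python sketch; the fan-out table standing in for the receiver's routing logic is hypothetical.

```python
from collections import defaultdict

# Sender side: when a neuron fires, transmit only its grid address.
def emit_events(fired_addresses):
    for addr in fired_addresses:
        yield addr                       # the "packet" is just an address

# Receiver side: a routing table maps a source address to its target
# arbor -- the neighbourhood of cells it should influence.
fanout = defaultdict(list)
fanout[(3, 7)] = [(3, 6), (3, 7), (3, 8), (2, 7), (4, 7)]

def deliver(events, inject):
    for addr in events:
        for target in fanout[addr]:
            inject(target)               # drive the target cell's dendrite

# One spike at location (3, 7) fans out to its five-cell arbor.
deliver(emit_events([(3, 7)]), inject=lambda t: print("input to", t))
```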
I hope you got a feel for that. Now let's dive into that pixel, look at what happens inside it, and see how we can combine this information into something useful. Inside the pixel you have the cell body, the nonlinearity, for which you put in a bunch of transistor logic to emulate it. You have your filtering transistors; they serve as the dendrites, collecting information from all the input coming through. And then you have the equivalent of the source cell's axonal arbor, where an individual address-event actually arrives. Similarly, when the cell fires, you just send the event out of the soma. So this is a zoom-in on the previous slide.

Now I'm going to bring this together with the memristor logic. How would you place memristors, remember, the nanowire synapses, so that you get high-density connectivity on top of a neuromorphic architecture like this, to give you an immensely connected learning system? That's what we'll go over in the next one or two slides. You can have this crossbar; this bar essentially represents one nanowire. One way to think about it: from your silicon substrate, where you have received your address-event, you pump the signal up to the top of the chip and send it down a nanowire oriented in this particular direction. Similarly, you bubble up a connection from the dendritic side, these are the wires that are going to collect that information, straight to the top of the chip, and run nanowires through there, connected to it. The crossbar junction essentially gives you a synaptic connection, and we're talking about nanometers here; that density is what you're trying to buy. What ends up happening looks like this: with one dendritic nanowire and one external nanowire meeting here, you have one connection. But what about the neighboring pixel, and the neighboring pixel in every direction? It ends up looking something like this: you have a crossbar array where your input cells have target destinations that let you bubble that signal up and spread it across a certain distance, and you have your collecting wires, the dendritic wires in blue, which form a cross grid that collects this information from a whole bunch of axons.

Now take this at scale: you essentially have a grid of these pixels, and the particular arrangement in which I've laid out these wires is actually called a Likharev CrossNet; look it up if you're interested, it's pretty cool. All your connections are offset by one location: notice that this particular crossbar doesn't touch this one, so you're obviously not shorting connections here, and a nanowire coming in from an external connection, from way below the screen here, would terminate before hitting the dendritic crossbar. So you have a grid of these alternating crossbars that intersect only at synaptic connections, so you don't short them. What this ends up giving you is that axonal and dendritic arbor structure.
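Electrically, a crossbar array is a matrix-vector multiply: input nanowires carry voltages, the junction conductances are the weights, and each output nanowire sums the resulting currents (Kirchhoff's current law does the addition for free). Here is a sketch with made-up sizes, including the offset wiring that gives each output a shifted window onto the input:

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_out = 16, 12
G = rng.uniform(0.0, 1.0, size=(n_out, n_in))  # junction conductances = weights

v = rng.standard_normal(n_in)   # voltages on the input (axonal) nanowires
i_out = G @ v                   # currents summed on the output (dendritic) wires

# Offset wiring: each output taps a shifted 5-wide window of the inputs,
# i.e. a local receptive field -- structurally a convolution.
width = 5
windows = np.stack([v[j:j + width] for j in range(n_in - width + 1)])
kernel = rng.uniform(0.0, 1.0, size=width)
i_conv = windows @ kernel       # one current per shifted window
```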
To illustrate that, I'll point at this one blue wire. This blue wire ends up over here; it came out of the dendrite of a particular cell. It obviously intersects the axonal wire over here but doesn't intersect it over there, which means each neighboring pixel is looking at a slightly shifted version of the input coming into the chip. These are essentially convolutional filters, each looking at a shifted window; that's the analogy you'd draw from your machine-learning studies. So the analogy comes back in: when you're building your machine-learning systems and you want to put them into architectures that can scale to extremely large sizes, you could reuse this kind of technology to get that kind of connectivity at that kind of density, and make it extremely useful.

I'm going to wind up this exercise here; I hope I've given you a feel for the kinds of elements that go into building this architecture. There's no way I can make this exhaustive. Just a little bit more: if I take out all the confusing extra wires, you can see that the input coming in over here is actually hitting the crossbars you see over here, for the cells that were in view.

And just to summarize the picture from both sides: we have our brain, which is extremely small and uses extremely little power, and we're trying to build the equivalent electronic brains. This slide is out of a DARPA SyNAPSE project report; this is the kind of thing TrueNorth and Neurogrid were supposed to help build. Regardless of the exact architecture, you need to be able to connect a lot of information across a lot of chips using long-range connections; that is handled by things like the address-event representation I mentioned earlier. Inside those individual chips you would build dense grids of neurons, essentially neural network layers, using extremely little power and fitting a lot of them per chip. And for each of those neurons you'd end up with a lot of crossbar junctions, delivered through things like memristor nanowires. What does this all give you? It gives you the beginnings of an architecture that takes your GPU clusters and shrinks them to the size of maybe a single board with a few of these chips. So the story for all of you is: when you're building your deep learning algorithms, or hopefully building things that go beyond deep learning, there are hardware architectures coming out of labs around the world, and from companies such as IBM, that will let you translate all of that into single chips or single systems that are several orders of magnitude smaller and use much less power. And you don't even have to change your computing paradigm right now: these boards could sit in the cloud, just as your GPU clusters sit on an AWS instance, and you could still do the same kind of computation. With that, I'll stop talking and leave it open to questions.

Q: Who builds TrueNorth?
A: TrueNorth is built by Dharmendra Modha's cognitive computing group.

Q: And are these chips also capable of letting you train, or do they take a pre-built neural network?
A: Unfortunately, TrueNorth does not have learning on board; they don't use memristive devices. The goal of the SyNAPSE project was to use learning-capable devices and put them on the silicon substrate.
Unfortunately, there are practical difficulties in putting this kind of technology on chips; it's not yet a reality, at least not at scale, but it will come.

Q: So you still train your neural networks on GPU clusters and use the pre-built ones?
A: Right. I think the philosophy for Dharmendra's group is to train on their Blue Gene computers, where they have the exact same architecture mapped out as the TrueNorth system; once the learning is done, deployment happens on the TrueNorth chip. I think that's the way they're approaching it. I doubt the TrueNorth chip is going to be in a mobile phone anytime soon, but having said that, companies like Qualcomm have started a neuromorphic group specifically to try to build chips that will eventually go into cell phones. So you could potentially have a cognitive co-processor sitting on your phone, doing some of the tasks for which you'd otherwise need a GPU cluster in the cloud.

Q: When you're talking about the nano level, the physics of transistors also comes in as a limitation, right?
A: There will always be limitations, but with the kind of density you can already achieve, with the kind of nanowires they're using at HP Labs and other places, you can hit something on the order of 10^10 junctions per square centimeter. That roughly matches the connectivity in your brain: a memristor crossbar array with today's technology, if they can scale it, matches your brain.

Q: But in the brain and the neurons, it's actually potassium ions which are traveling...
A: Lots of different things are happening, but yes, those are biophysical processes that use ion-channel kinetics.

Q: Aren't the limitations of transistor physics in CMOS fabrication still a problem? For high speeds, optical switching was used because of the physical limitations.
A: Sure. There are two ways to approach this. One, you can push the boundaries of physics by coming up with new devices; you can try optical switching, for example. But there is an alternate path where you use existing CMOS technology: both TrueNorth and Neurogrid used existing processes; in fact, both were built in IBM foundries. Existing CMOS can get you a long way toward meeting a lot of these requirements without having to invent anything new. It's a very powerful argument that the architecture can trump the limitations of physics, and that's the basic premise. There are two ways to keep Moore's law alive: you continue hitting the boundaries of physics and push for innovation there, which should happen of course, but you can augment that by exploring alternate architectures that give you that boost because of the way they compute, rather than how many computing elements there are, such that the compute throughput is much higher.

Q: Hi, I have a couple of questions. One of them: what kinds of neurons are these systems able to implement? Are they limited to, say, integrate-and-fire neurons, or do they go all the way up to richer models?
A: There's a very straightforward answer right there in my slides: Neurogrid, for example, modeled ion-channel kinetics using subthreshold analog circuits.
TrueNorth, for example, is probably much closer to integrate-and-fire neurons, because it's all digital. It doesn't have to be one versus the other, because they share the fundamental architecture, which is how you put these systems together. If you believe there's a lot of power in the exact method of integration and firing for a neural element, you can definitely try different variants, and different people are trying different variants.

Q: But what is actually important is the richer dynamics that other neural models are able to accommodate, right?
A: Sure, you could try a whole bunch of different models for your basic compute element. There is no limitation; someone just has to start trying, and several people are trying several different neural models.

Q: Hi, one thing you told us is that you want to motivate people to start doing this in India. So what's the cost, and what's the scale we're looking at? Is it really possible to do this here?
A: Good question. I was part of an academic lab: Neurogrid was built in an academic lab at Stanford. It didn't have to be deep inside IBM with millions of dollars behind it. Let me give you actual numbers. We built Neurogrid chips in batches of 40 or so, if I remember right. The way it's done is that academic labs submit their designs, which are all packaged together so they share a silicon wafer, before going into a foundry like IBM's. Doing a batch of 40 or so probably costs $40,000; that's cheap. A full Neurogrid system, if you wanted to build it from scratch and not at scale, would probably also cost you around $40,000, but you're talking about a pocket-sized supercomputer for $40,000. And done at scale, the costs obviously plummet: once you can buy the entire wafer and you're making millions or billions of these chips, the unit cost drops. So this is within the realm of possibility today; a single chip could be the same price as a Pentium, which I think brings it within the realm of reality.

Q: One analogy that's often drawn is between the brain-inspired approach to AI and the airplane versus the bird: how an airplane flies versus how a bird flies is often compared with how a brain thinks versus how a computer should think. I'd like to know what you think of this comparison, whether you're sympathetic to it, and what the community has seen to convince itself that the analogy does not hold in this case, in that it can guarantee success in the longer run.
A: I'm not trying to punt the question, but I don't think it needs an answer. Let's just take something Shailesh brought up, which I largely believe: deep learning today draws some analogy from a lot of neuroscience, definitely, but it has its limitations, because it doesn't adhere to some of the other principles we think are important. Is it useful? Yes, absolutely; otherwise you wouldn't have this deep learning boom in the world today. So at every point in time there are going to be abstractions that throw out a lot of the way it's actually done in the brain in order to build something intermediate that is very useful.
So I see nothing wrong in that. The only problem is that we shouldn't oversell it, the way AI and neural networks have "died" every ten or twenty years because the public got disillusioned. It doesn't have to happen that way, as long as people understand the limitations and keep moving. Hopefully people like myself and others will continue evolving and building new architectures that go beyond deep learning, while the world, ourselves included, exploits deep learning to build useful products. I don't see a reason for the world to stick to one versus the other; you'll always derive some benefit at every level of abstraction. Any other questions?

Q: Will superconductors make any difference to this? I know the superconductor field is pretty nascent and has yet to evolve a lot, but would it make a big difference?
A: Sure, I see no reason why any advance wouldn't help here. When Carver Mead was starting out with these things, nanowire memristors did not exist. Any new technology, once it reaches a certain state of maturity, the world will find a way to morph into this kind of use case. So there's no straight answer until it's actually practical to morph it in.

Q: My second question: we tend to have different abstractions depending on context. If I'm talking about dogs and cats, I treat every dog the same; but when I'm talking about breeds, there's a hierarchy, and I'm no longer looking only at "dog", I'm thinking of Labradors and Poodles within dogs. Then there's a different abstraction, males and females, which applies across species and breeds, to a Labrador as well as to a Poodle. So there are cross-connections between these hierarchies in the way we store knowledge. Would it be possible to emulate those hierarchies here?
A: Sure. Let me paraphrase your question, if you don't mind: you're asking whether what we do in our brains, having a whole bunch of cross-associations, can be ported to something like this. The answer is definitely yes. The reason for building systems like this is that you can build larger systems and be able to model such phenomena. The advantage of these systems is that they're not necessarily stuck with a deep learning architecture, where you can't easily combine information across different networks or provide feedback in a way that works very well yet. These, from day one, are being built with applications like that in mind. Without going into further detail, I'd recommend you check out what Kwabena's lab is now doing with Neurogrid in terms of applications like those.

Q: Thank you. I saw there was an experiment, reported in The Hindu I think, where they used a portion of a rat's brain cells, connected visual inputs to them and the outputs to two robotic motors, and they were able to make the robot move around the room. I think they called it wetware, because it combines biological processing units with electronic hardware. I'm not suggesting we use wetware instead of hardware; I just wanted to mention it.
A: Sure, and it's definitely possible, though the problem is maintaining organic systems.
The reliability, keeping them alive, is tricky. If you're talking about cultivating cells, keeping cells in culture and connecting them to networks, then one day a virus infects your colony and you've lost your chip, your wetware. These silicon systems, by their very definition, you can duplicate: you can clone them, make millions and billions of these chips, without ever having to worry about losing an entire system that way. There is one additional thing: wetware, by its very definition, has synapses and connections which you can't necessarily measure or capture. For the same reason, can you read your brain? The answer is: not yet, definitely, and it's not an easy task, because how are you going to map out those ten-nanometer, extremely tiny synapses, trillions of them, and actually copy and paste them somewhere and clone them? Whereas these chips can be read, because everything here can be read out. I'm not entirely sure a memristor array can be fully read, but potentially, yes.