It's unlike any field that I've worked in before, where you wake up each morning and there can be anywhere from 5 to 10 new papers coming out that are beating the previous state of the art, and just the pace at which developments are happening, it's unprecedented. And so what we felt was that just like the rise of graphics gave rise to the GPU, the rise of AI would need AI processors. Because for deep learning we know that it's a certain kind of computation that needs to happen: dense matrix multiplication, vector addition, application of nonlinearities. So in the way we designed the chip, we could emphasize those kinds of computations so we're really fast on those. We also know that with neural networks you're often operating on these weight matrices over and over again, and so the more of that weight matrix, or that state of the neural network, you can keep on chip close to where the computation needs to happen, the faster it is and the more you can save on power as well. Intel is looking at the full range of form factors in which AI is going to be used in the world. This is another thing I like about the field: a lot of the papers, the models, and to some extent even the data sets have been open sourced in this field, and there's all these open source libraries that people can get started with. Boom, what's up everyone? Welcome to Simulation. I'm your host Allen Saakyan. We are on site at Intel San Diego. We are now going to be talking about the future of AI. We have Dr. Arjun Bansal joining us on the show. Hello. Hi Allen. Thank you so much for coming on the show. Thanks for having me. I'm super excited for this episode. For those that don't know Arjun's background, he's VP of the Intel AI Lab, where he leads an international team of researchers and data scientists working on both cutting-edge machine learning research and data science to support Intel's products. You can find all of his links in the bio below.
Arjun, let's start things off by asking you: what are your thoughts on the direction of our world? I think there's a lot to be positive about and to be hopeful for, especially in the part of the world that I tend to operate in day in and day out, which is artificial intelligence. It's unlike any field that I've worked in before, where you wake up each morning and there can be anywhere from five to ten new papers coming out that are beating the previous state of the art, and just the pace at which developments are happening, it's unprecedented for any field that I've been part of. So I think it's really exciting to see how all of this could be brought to bear to benefit humanity. Oh man, you're right about the speed at which the advancements are happening and trying to keep up with what is the cutting edge. One of the things is that you just don't want to learn a skill or a technique in AI or biotech or whatever the field is, just for it to be obsoleted by some other cutting-edge technique, because then all that time that we spent learning how to do something isn't as applicable anymore. And just to be able to, like you said, keep up, to have the benefits be democratized around the world and increase the degrees of economic freedom for people to pursue their most divine purposes on the planet, all that type of stuff that AI can be so, so helpful for. Who were you growing up that got you interested in computer science and machine learning? You were born in Delhi, India, so tell us, give us the trajectory. Sure, yeah, so I think growing up I was into quite a bit of science fiction. Read books like Neuromancer by William Gibson, watched the Jonny Quest cartoons on TV, and through that got really into technologies like virtual reality and artificial intelligence. And then by the time I was in high school I started thinking about what the state of those technologies actually was.
And I think both were kind of at a point where there'd been sort of a hype cycle, maybe around that time in the late 90s, and by the time I got into college I started going to professors and being like, hey, as an undergrad can I start doing some research in one of these areas? And it was kind of funny, because the professors said, hey, nobody's really doing AI research right now; they're looking at other areas like neuroscience for inspiration. And so that's how I got into neuroscience: started working in a neuroscience lab, and as I got deeper into that, got really fascinated by how the brain works, just looking at how little we know about how the brain works. And it was kind of interesting, because people were actually using a lot of machine learning to understand the data that they were getting from neuroscience experiments. And so because of that I was kind of close to what was happening in machine learning while also doing neuroscience research, and ended up going to grad school in a lab that was doing brain-machine interfaces. So recording from primate brains and using machine learning to understand the signals, to decode those signals to perform things like controlling cursors on a screen or moving robotic arms, eventually with the goal of applying it to help paralyzed people move or give them some kind of mobility. Beautiful, yeah. And then after that I did a short post-doc for a few years applying some of this with human data, with human epilepsy data. And after that I decided to make a transition into industry and joined a group at Qualcomm which was working on making a neuromorphic chip. So it was kind of a slow return back to artificial intelligence, sort of using the neuroscience to inform the development of technologies on the AI side. And shortly after that, with some friends and colleagues, decided to co-found Nervana Systems, which was basically building full stack technology to accelerate AI.
And we did a lot of great work there, and within a few years got acquired by Intel, and that's how I've ended up here. Yeah, what an interesting trajectory. So yeah, when you're a kid you're learning about all these different cutting-edge fields, and then you actually got immersed in working on them. And that's such an important thing: when you identify, when it seems like you're sniffing out what your purpose is, what brings you most meaning, to actually go and seize those moments and pursue them, and then you kind of opened up more doors for yourself as you kept doing that. And so then what was it, when you were figuring out that there's so much interesting synergy and overlap between neuroscience and artificial intelligence and machine learning, like you said with the neuromorphic chips, and then when you were doing Nervana, you were figuring out, okay, what is it about this full stack AI that we're not thinking of yet? How did your mind start conceptualizing the future of AI? Yeah, yeah, I think a few developments happened in the field outside of what I was doing at the time. So there's this technique of deep learning that got a lot of prominence just around the time that we were starting Nervana. And this is like ImageNet, right? ImageNet was a big milestone that happened just before we started, and before that, I mean, neural networks had been around for a long time, like since the 60s, and it was just a challenge to scale them to real life problems. There had been some limited applications in terms of reading the zip codes on mail and reading numbers on checks automatically, but it had been hard to extend that to more general images or photos that we might take.
And through that ImageNet moment that happened in 2012, that was huge, because it was actually applying these techniques on much larger data sets and getting state-of-the-art performance that was much better than what had been possible before. And that was basically powered by the availability of a lot more computational power, slight tweaks to the algorithms that people had used before, things like dropout and using a different kind of normalization, and then the availability of data. So the internet allowed for a lot more data to be collected and labeled compared to what had been possible before. And so I think because me and some of my friends and colleagues had been in this intersection of neuroscience and machine learning for all those years where it wasn't working, when it started working we kind of had these front row seats to see that, oh, now things are working and there's this opportunity to dive in and try to do something. And just given where we were and who we were, we looked at the hardware performance angle as that initial starting point. And then over time, Nervana started developing quite a bit of software to make this new hardware platform usable, because if you just build a chip and nobody can connect it to their products, then that's not very useful. And so we got together a team that was building, well, for any hardware product you have drivers and firmware, so that's kind of the low-level software, but then you need to write a lot of software to run some of the basic computational primitives that go into deep learning: things like matrix multiplication, convolution, just addition and subtraction happening on matrices and tensors. So we were able to hire some of the best people in the world for that, and we had our own deep learning framework. Today everybody knows things like TensorFlow and PyTorch, but back when we started none of those existed.
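The primitives named above (dense matrix multiplication, vector addition, nonlinearities) are what one fully connected layer of a neural network does. A minimal NumPy sketch, purely illustrative:

```python
import numpy as np

def dense_layer(x, W, b):
    """One fully connected layer: dense matrix multiplication,
    vector (bias) addition, then a nonlinearity (ReLU here)."""
    z = x @ W + b            # dense matmul + vector addition
    return np.maximum(z, 0)  # nonlinearity

# Toy example: a batch of 2 inputs with 3 features -> 4 hidden units
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 3))
W = rng.standard_normal((3, 4))
b = np.zeros(4)
out = dense_layer(x, W, b)
print(out.shape)  # (2, 4)
```

An AI ASIC of the kind described would dedicate its silicon to making exactly these three operations fast, since deep networks are essentially this pattern stacked many times over.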
So we actually ended up writing our own framework, and for a couple of years it was the fastest on a lot of the benchmarks that people were using at the time. And then we also had a cloud service, because at that time there wasn't any way for people to use AI on cloud services, and so we had the stack that went all the way up to providing cloud services. And so I think that was kind of the unique thing: both that there's an opportunity to accelerate with special purpose hardware for deep learning, but then also to have this very deep stack in order to monetize that, to get the value of using this technology out into the world. Yeah, the timing was huge: you and your colleagues had already been at the edge for a while, in both neuroscience and AI actually, and then it ended up being that there was finally this hockey stick moment for the field, for you guys to be like, okay, let's dive in deeper. And then when you did that, I'm really curious to unpack this with you: what was it specifically about the design of the chips, and what was it about the software, for it to be able to plug into other people's applications? How did you guys figure that out? What's so unique about it? Like, why did Intel want to buy Nervana?
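The "full stack" idea, a framework whose user-facing code stays the same while the hardware underneath changes, can be sketched as a backend abstraction. The class and method names here are illustrative, not Neon's actual API:

```python
# Hypothetical sketch of the backend-abstraction idea behind a framework
# like Neon: model code calls one interface, and the backend underneath
# (CPU, GPU, custom ASIC) can be swapped without rewriting the model.
import numpy as np

class Backend:
    """Interface every hardware backend implements."""
    def matmul(self, a, b):
        raise NotImplementedError

class NumpyBackend(Backend):
    """Stand-in for a CPU backend; a GPU or ASIC backend would
    implement the same interface with its own kernels."""
    def matmul(self, a, b):
        return np.asarray(a) @ np.asarray(b)

def linear_model(backend, x, W):
    # Model code is written once, against the Backend interface.
    return backend.matmul(x, W)

y = linear_model(NumpyBackend(), [[1.0, 2.0]], [[3.0], [4.0]])
print(y)  # [[11.]]
```

Under this design, the scenario described later in the conversation, where a new processor "comes into the cloud" and customers see a speedup without code changes, amounts to registering one more `Backend` implementation.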
Sure, yeah. So on the hardware side I think we had a few key insights as to how we could do better than a GPU, which is the hardware platform that was being used for a lot of these AI applications at the time. And the GPU, as most people know, was originally invented to speed up graphics applications, and it just so happened that some of the computation that happens for deep learning also benefits from the use of a GPU, and so a lot of those early wins that happened, like with ImageNet, part of that was how they leveraged GPUs in order to speed up their computation. And so what we felt was that just like the rise of graphics gave rise to the GPU, the rise of AI would need AI processors. An application-specific integrated circuit? Exactly, yeah. So we felt that just like in the early days people would do graphics on CPUs until people said we need something dedicated for this, we felt that people were going to be doing AI on GPUs initially, but then people were going to realize that this is its own kind of workload and could benefit from an ASIC for deep learning. And so we had some ideas around that, because for deep learning we know that it's a certain kind of computation that needs to happen: dense matrix multiplication, vector addition, application of non-linearities. So in the way we designed the chip, we could emphasize those kinds of computations so we're really fast on those. We also know that with neural networks you're often operating on these weight matrices over and over again, and so the more of that weight matrix, or that state of the neural network, you can keep on chip close to where the computation needs to happen, the faster it is and the more you can save on power as well, because a lot of energy goes into getting data from host memory onto the chip and back. And so that was another way we thought that we could do better than a GPU. So the two so far: first, deep learning requires all these different styles of mathematics, and you're able to optimize those
specific mathematics in the ASIC that you designed, that's first. And then second is putting the computation right there on the chip, rather than doing something like sending it to the cloud for the compute and then back? Is that about right, or how would that be? I think it's even more local than that: so within the chip, having memory on the chip versus going to the RAM or something like that. Okay, so how would you build RAM onto the ASIC? Wouldn't that make it bigger? Yeah, so typically you can't have as much RAM as DDR RAM on the chip, but there's SRAM that you can have on chip, more than you may have on a GPU, because we know that we're going to need it to store something like the weights of the matrix, which we're going to need to keep around. So it won't be as big as the DDR RAM that you would expect on a laptop, but relatively more than what you would expect on a GPU. Okay, okay, whoa, okay, this is starting to click more. So having that be local, storing these critical mathematical values right next to your chip's compute as you continue doing more and more of the deep learning process, is what you would prefer, rather than having it go further out to the DDR on the actual computer. Interesting. And then keep teaching us about this: it seems like the design of the chip, and the design of the way that we process all the complex mathematics that need to be done, this is kind of like the most first-principle thing. How is the chip designed, and how does it optimize all of the complex mathematics that have to happen on it? That's like the first principle, would you say, for optimizing this? And I think we know that we don't need to support things that are needed for graphics, and so that saves us some die area, to not have that logic that's needed to support graphics; instead we could have logic to have more connectivity between chips. So that was another big part of our chip: having a lot more bandwidth available for chip-to-chip communication that doesn't rely
on things like Ethernet. So that's parallel processing? Exactly, yeah. And so that can allow for building much larger models and being able to run a lot more data through the models, so both of those things can help with speed as well as accuracy of the models. So then how does one of your chips communicate with another chip? Normally, you said, it's via Ethernet; they have to be both connected to the same local area network? That's a typical way. And how do you guys do it? We have a custom interconnect between our chips, and that's able to provide a much faster and lower latency connection between chips. So kind of the vision that we had for our chips was to basically make it available to any data scientist or researcher or application developer as just one big chip; that's basically how they should think about it. Because I think previously people were sort of building the models right at the edge of what a GPU could do, so it was funny, if you looked at some of the papers, in one year they would be at like 4 gigs and then the next year would be at like 6 gigs. And so one of our dreams was, let's just make it one big mesh, so that people are just building what they want to build and not being limited by what's available on the GPU in terms of memory. And so then, that was the hardware side of the construction, and then what about, how did you make it accessible for companies to want to leverage it? Did they ever have to buy hardware themselves, or could they just access your hardware remotely? Yeah, so basically the idea ended up being that we developed this cloud called Nervana Cloud, and initially we just built it on GPUs, because that's what was available while we were developing our ASIC. And as I said before, we had this framework called Neon, and Neon was built in a way that you could run different hardware back ends underneath it. And so if our customers built their AI programs on top of Neon, then they didn't have to think about the underlying hardware: is it
CPUs, is it GPUs, is it the Nervana processor. And so the idea was that when the Nervana processor was fully developed, it would come into the cloud and customers would just see the performance improvements and not have to rewrite any of their code. And then is that kind of what ended up happening, as you guys made your ASICs you added them to the cloud? So, yes, I think we got acquired before we had the first rev of the chip back, and post-acquisition the focus has been more on the silicon itself and not so much on the cloud service, given Intel's business model, and so that's where we have focused. And then you had a good amount of diverse organizations that were using Nervana Cloud for their computational purposes. Like, why were people picking the Nervana Cloud over the Google cloud or Microsoft cloud and whatnot? Because they didn't exist at the time, so we were actually first to market with that kind of a product. So when we came out, some of those competing products didn't exist, and yeah, I think people were sort of wanting to try out something that's really custom built for AI. And like you said, we showed that people could apply it to agriculture, energy, healthcare, government, finance; we had done engagements in each of these different industries, and we'd shown value in computer vision and speech recognition and natural language processing, so we had a pretty good matrix of customers across these different domains and application areas. Yeah, that's huge. Okay, so a variety of different organizations leveraging it for all the different artificial intelligence purposes. And then was there ever a concern with the companies, because like you said, this was pre-Google and Microsoft clouds? The AI clouds. Oh, their AI clouds, okay, yes, good clarification. Yeah, pre-AI clouds; they now have their own kind of ASIC-style clouds for AI specifically as well, at least Google does. Okay, was there ever, even back then, this concern, and what
do you think about the future of this concern, about keeping my computations local versus shipping them off to the Intel Nervana chips, or shipping them off to the Google AI cloud, versus keeping them local? What are your thoughts? Do you still feel like there's going to be a great way to encrypt that and to still go and send that? Is that what you think is the future there? I think there's probably going to be a bit of both. I think there's some applications where people are more concerned about privacy, and there is more of a drive to keeping things local. But I think it's just like with public clouds in general, right, not even thinking about AI, but just general data being on clouds or computation happening on clouds. So I think it's just part of that; I don't know if AI brings in any special differences. So I think if an organization is comfortable keeping its data in the cloud and doing its regular computation on the cloud, then they'll probably be okay doing the AI computation in the cloud as well; but if for privacy reasons or other reasons they need to keep that local, then they would want to keep the AI computation local as well. But also on the technology side, there are things like privacy-preserving machine learning, which another group here at Intel has been working on. The idea is to be able to do machine learning on encrypted data, and that way you don't have to share your personal information with the company and can still use it to do something useful and provide a service to you. So that's kind of an emerging technique which, if it succeeds, could be a big breakthrough in terms of just how AI is done and how computation happens for these tasks. So then with the AI Lab at Intel, is part of what you're doing figuring out, like for NLP or for image recognition, that we have to build a specific ASIC for just NLP or for just image recognition? So we haven't gone to that level of specialization yet; we
find that there's certain primitives and computational motifs that are shared across these different domains, because they're all sort of operating within this framework of deep learning and artificial neural networks, which requires dense matrix multiplication and vector addition and nonlinearities. And so even though the applications are a bit different, the math and the computation is similar enough that we could service the needs of these different areas with one chip. It could be that in the future there is a need to specialize. Okay. And then the other thing is on the edge side of things: Intel makes products all the way from the data center to the edge, and on the edge you have more constrained environments with lower power, and yeah, I think lower computational power as well as low battery consumption and so forth. So there, there could be more of a case to have chips that are more specialized for computer vision, if you're going to put them in a camera or augmented reality goggles, something like that, versus if you wanted to put something in like a personal assistant, smart speaker type of device. Well, okay, so then our future could very much be specific chips for specific applications, in everything from cameras and sensors, all the way to what the chip is actually doing, language processing or image processing. So it's both the data that it's processing, as well as, like you were also giving an example, does it have to be on 24/7, or does it only need to turn on every couple minutes to take an image of the area that it's looking at, or whatever the scenario could be. Interesting. Whoa, custom designed chips for all different needed applications. Yeah, yeah. I think if you need it to be very efficient from a power perspective, that's kind of the way to go, to make it very custom. But the trade-off there is that these algorithms are still evolving, so every few weeks or few months you have better image recognition systems
or better speech recognition algorithms, better natural language processing algorithms. So if you make something too custom, then the danger is that it's not going to be at the cutting edge in terms of accuracy of the model. And so that's kind of the trade-off we're dealing with; as people who make hardware, we and the other companies that are making hardware are always trying to find that right balance. And then, to speak to the importance of the data, the data being structured: this is something that is now coming up more and more often in the way that we see AI, that having unstructured data is super challenging. There's so many companies now that are focused just on structuring data, to make it easier for us to have artificial intelligence applications happen. So what are you doing with that, and how is Intel working with that? Sure, yeah, I think Intel is sort of mostly operating at the hardware layer, I mean, we do have some open source libraries and so forth. But one way to think about it is that a lot of deep learning has happened because of labeled data sets, and that's enabled supervised learning. And recently we're seeing that semi-supervised learning and unsupervised learning are having some successes, but maybe still not at the level of supervised learning yet. And then unsupervised learning or semi-supervised learning ends up needing quite a bit more in terms of computational resources or sizes of models, a little bit of what we're seeing with the natural language processing models that have really taken off in the last six months to a year. And so I think that's kind of a trend that affects hardware: if you're learning with unlabeled data, you could be using a lot more of the data compared to what's being used so far. I think there's some stats like only 1% of the data in the world is labeled and the rest of it is unlabeled, and so if the algorithms get to a point where you can really start leveraging that unlabeled data as well, then that
automatically means that there's need for a lot more computation, and that starts having a big impact on what we build. And then what else are you seeing on the hardware side of the future of what's going on at the Intel AI Lab? I think on the hardware side it's basically just this trajectory of providing more and more compute, more memory, so people can build larger models and train them faster. And then there's this distinction between training and inference as being separate types of problems: with inference you care a lot about latency and power, and often you're just getting one sample at a time and you want to be able to return a result back, so it needs a different kind of hardware. And then there's this distinction between the data center and the edge: the data center is less power limited compared to the edge, and so you have some different kinds of trade-offs there. Explain the data center versus the edge trade-off. Yes, I think edge would be, you know, anything like chips that could be in cameras or smart speakers or self-driving cars, and they may not be connected to a power source, so generally when you're designing hardware for those kinds of form factors, power ends up being a big issue. With the data center, you could think about, you know, social networks or search engines or online photo websites where you're uploading pictures and you want to be able to search through them pretty quickly. So that's a little bit different, in that you can batch images together and you don't need the response right away in some cases; I mean, in some cases you do, like in a search query you might need it right away, but there's a lot of tasks where you don't need it right away. So yes, I think you just look at what is the application that the customer wants: do they need the answer right away, does the latency need to be low or not, and are they going to be giving it one query at a time? Like with a smart speaker, each time a person asks something
you want that answer right away; you can't wait to batch together like 100 different requests, right? Whereas with images, if I upload like 10 images or 100 images, I can just put them all together and do the computation on them. So when you can batch things, that's more of a matrix-matrix type of computation that needs to happen, whereas when you can't batch things, that ends up being more like a matrix-vector type of operation. And so we just have to make sure in the hardware that these two different kinds of computation are prioritized depending on what the customer is going to be using it for. Okay, so in a sense then, maybe edge is kind of like a camera sensor on an autonomous car, where it's just receiving the data moment to moment and feeding it in for compute, and then the other one was the data center itself, which is kind of just the centralized place where the data is coming in and being processed. So that's edge versus center? Exactly, that's right. And I think, as you said before, often when there's privacy issues, those edge devices kind of play a role there, where you can just do the computation right there and you don't end up needing to move the data to the data center to perform that computation. Interesting, okay. But there can also be latency reasons for doing that: with a self-driving car, for example, you can maybe have like a 50 millisecond latency versus a whole second latency, which, yeah, can be too long for certain decisions that need to be made and whatnot, if the compute is localized at the edge. So local compute at the edge is a big deal for decreasing latency, and for security as well. So is Intel kind of taking that into account, both figuring out chip designs for being at the edge and having that lower latency, versus having, like you said, all these parallel computes happening in the data center itself? Yeah, yeah, yeah, Intel is looking at the full range of form
factors in which AI is going to be used in the world. And so you have CPUs themselves, which go from, you know, big honking Xeons in data centers, to Cores that are on laptops, and then also Atoms that are in many of the devices. And then you have GPUs, like the integrated GPUs that are present in a lot of laptops and desktop devices. You have the Nervana processor that we already talked about, for the data center, both for training and inference. And then Movidius was another company that was acquired around the time that Nervana was acquired, and they really focus on the low power, edge part of the spectrum. And then Intel also bought Altera a few years ago, which makes FPGAs, and there's customers who use FPGAs for AI as well. Do you ever kind of wonder about, you know, you guys were being acquired, all these other people are being acquired, you have to manage an international team all doing different aspects of chip design and manufacturing for specific use cases; does it ever feel like maybe Intel's gobbling up a lot of what's happening? No, I think AI is kind of a space where, you know, Intel's still working towards a leadership position, so we're not there yet. But it's great that we have all these different architectures, and it gives consumers a lot of choice in terms of not getting locked down to just one architecture or one way of doing things. Who are the other chip manufacturers right now that are prominent around the world? So, you know, NVIDIA has GPUs that are used quite a bit for these kinds of applications, and there's a lot of startups that have emerged in the space; by some counts there's something like 70 different startups. 70? 70 that are working in the space. So there's definitely a lot of competition. Very interesting space, interesting times, I think. In China, is Huawei doing any of the chip design? Who's doing the chip designs right now? I think there's a few different companies that have announced plans to do chips; I think Huawei is one of them.
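The data-center-versus-edge distinction running through this part of the conversation comes down to the batching point made earlier: grouped requests become one matrix-matrix multiply, while a one-at-a-time edge query is a matrix-vector multiply. A minimal NumPy sketch of the two shapes of work:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((512, 512))  # one layer's weight matrix

# Data-center style: 100 queries batched -> one matrix-matrix multiply
batch = rng.standard_normal((100, 512))
batched_out = batch @ W              # shape (100, 512)

# Edge / smart-speaker style: one query at a time -> matrix-vector multiply
single = rng.standard_normal(512)
single_out = single @ W              # shape (512,)

print(batched_out.shape, single_out.shape)
```

The batched case reuses `W` across all 100 rows in a single pass, which is why throughput-oriented data-center hardware favors it, while latency-bound edge devices have to be fast on the lone matrix-vector case.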
I don't remember off the top of my head all the others, but there's a few. But Google and Microsoft, don't they do any chip designs themselves? Google has the TPU. And Apple does not? Apple has something called the Neural Engine that's on the iPhone, so that's more for inference, a more power-constrained type of environment. Where does Intel's AI Lab overlap with the future of quantum computing? So the quantum computing group is a different group in Intel Labs, so we're not part of that group; I think it's a different group at this point. But then where would chip design overlap with quantum computing in the future? You would want to design for the potential purposes of what quantum mechanics can bring to computation. Sure. My personal view, and I think the view of many others, is that the quantum computing chips and that wave is still at least 5 years out, if not 10 years out. So in the near future, ASICs for deep learning, but kind of the next wave could be quantum. It would be interesting to touch base with you again in the future when we have ASICs around quantum computing; that would be crazy, like how are the chips going to be designed for the quantum computing era? That's a really interesting field. There's a lot of startups and big companies working on that, so there's a lot of activity. It's unclear when it gets to that level of productization. I think they're sort of slowly working up to it; I think I've heard that maybe getting to like a thousand-qubit quantum computer would be a big milestone, when maybe some of the big applications could be unleashed. And then what about some of the issues that you are experiencing with AI? I think one of the challenges is just getting the benefits of AI into a lot of products and applications that people can use. So I think there's been all these promises of autonomous driving, or AI applied to healthcare, and just a whole bunch of domains where we thought AI was going to show up, and it's taken longer than we expected. Like five years
ago, when we were starting Nervana, we would have thought that AI would be a lot more visible in our everyday lives compared to where it is today. Some of the reasons for that are different in different domains, but if you take something like self-driving cars, there are all these corner cases: if you're at an intersection, you need to be able to understand the gestures that the other drivers might be making in order to make the right decision. So AI technology tended to focus a lot on things like identifying objects, for example, and could do really well on that, but missed out on this human interaction and understanding component, which I think is going to be really important in terms of getting it into products, which ultimately are still going to be interacting with humans. So even if you have the best technology, if you don't solve for that problem, then it's not very useful. It's easier to detect the stop signs that all look the same than it is to understand what hand gesture is being made at the corner where you're deciding on the next action. That's right. So then what is the process of democratizing the benefits of AI rapidly and effectively around the world, especially as we start going into the general intelligences and the superintelligences, and where does even chip design come into play with being able to handle a superintelligence? Yeah, I think there are maybe a few phases to it. What we've seen in the last five or six years has been huge advances in what people typically call narrow AI. So in a very narrow application, AI can do superhuman things, and that's been powered largely by deep learning, and we can build chips that speed that up, that build larger models and so forth. I think that trend is going to continue, and we can build applications out of that that are features within products. I think that's how I'd characterize it: you can go on social media sites and translate
what somebody has written in a different language, it may suggest labels to you of who to tag in those images, and all of that is being powered by AI. Probably in the next wave, what needs to happen is a little bit more of this human understanding, like I was mentioning before, and some level of human reasoning capabilities in these systems, which may tie together a bunch of these narrow AI approaches, so then you start interacting with systems that have a little bit more human-like properties. So maybe not full-on AGI; full-on AGI is like humanoid robots that can do everything a human can do, better, which I think is still pretty far out. I think there could be this intermediate phase where you have these things called agents, which are broader than narrow AI today but not full-on strong AGI. So maybe in a very narrow domain of human activity they can do as well as or better than what a person could do, but they can't do everything that a person can do. And where does chip design come into play for being able to get to that point of agents, and then the general intelligences? I think the question there will be: what are the algorithms that underlie those kinds of systems? Are they just a continuation of deep learning, more and more dense computation, dense matrix math, or is it some fundamentally different kind of motif, like operating on graphs, for example, or some kind of sparse computation, or something that requires trees or branching structures or loops? So it really depends on what the constituent motifs of whatever that system ends up being are. I think that's still an algorithmic problem right now, and to some extent there will be some co-development, right, because researchers tend to explore the areas where they can run these experiments in a short enough period of time, and that tends to happen on hardware that's available, with the capabilities of that hardware as they exist today. So I think that's kind of interesting, how that
co-development between hardware and software happens, and that will also guide the field in general towards certain kinds of exploration. Yep. And for Intel and other chip manufacturers around the world, what would maybe be an important principle? What does Intel already do, or what would you say would be a good way to democratize the benefits of artificial intelligence around the world? There are so many people that are still exploring their degrees of freedom and figuring out what they want to do around the world; they need their basic needs met and all these types of things, and chip design sits at the top percentage of computational awareness and understanding on the planet. So how do we get the benefits democratized? Sure, yeah. This is another thing I like about the field: a lot of the papers, the models, and to some extent even the datasets have been open sourced in this field, and there are all these open source libraries that people can get started with. So I think it's pretty easy for anybody in the world to get started with this, even if they just have a laptop. Typically what you can do is get a pre-trained model for a particular application and then fine-tune it on the dataset that you want to apply the model to, and that fine-tuning step ends up being not as computationally intensive as the original step of training that first model. In many cases you can get these pre-trained models from the model zoos of the various frameworks. So I think there's a lot that is happening. There are free courses that people can take; a lot of the eminent professors in the field have open sourced their course curricula and put up videos. And I think there's definitely a lot of good work happening, with people in the field going to Africa and doing courses there. And also at the conferences, I think people do want to share as much as possible with the community. I
think some of the challenges are that that original step does end up requiring a lot of computation and a lot of data, and a lot of that original data is proprietary to the companies. So that's something we need to think about how to address, but outside of that, there are a lot of really good things happening. And then with all of the different things that are being built into our future, including the computational capacities, or the 5G infrastructures, or the newest biotechs or neurotechs that are evolving around our world, does it ever feel like we need to maybe slow down and do more longitudinal testing on our own health and how it affects society, versus just kind of moving super fast? How do you feel about that? Yeah, I mean, right now, the way things are progressing, it is sort of whoever can move the fastest. But what I'm also seeing increasingly is companies setting up ethics boards and having AI-for-social-good types of endeavors. We started one in the AI Lab at Intel here as well, an AI for social good group, and we are deliberating on a potential ethics board as well. That's cool. So we are taking it pretty seriously, and we stay engaged with inter-company groups like the Partnership on AI, which tries to study these issues from a community perspective, issues that might affect all companies, and tries to inform standards and laws and so forth. But I haven't seen anything in terms of just stopping outright. Maybe there have been a few cases recently: I think there were some deepfake and DeepNude kinds of algorithms that were published, and those very clearly crossed the line, and so researchers said, no, we don't want to put this out in the world, and some of those were at least taken down pretty quickly. Interesting. Yeah, I like the optimism with ethics boards and AI for social good boards. I like that a lot. That
definitely, I hope, makes people feel more comfortable and more trusting. It's so important to have ethicists and philosophers and moral scientists working together with AI scientists and biotechnologists, etc., because separating them seems like a recipe for disaster, and the reason why so many civilizations before us have had collapses and other issues: they just started playing with godlike technologies without being spiritually advanced. Yeah, let's actually talk about that. What has been your relationship with your own spiritual growth over time, with God, with Source? What is your relationship with that? Sure, yeah. I think I've tended to be more scientifically leaning for a pretty long time. Maybe I was lucky, also, growing up in India, that I was surrounded by a lot of very spiritual, religious people, and my grandfather was very well read, so I spent a lot of time with him discussing these kinds of issues around science and spirituality. So I would say maybe I'm sort of 90% on the scientific path, but I do leave out some room for intuition. We still don't understand consciousness really well, so there could be things there that we don't really comprehend; maybe there's something fundamental to the nature of reality that we haven't discovered around consciousness. But so far we don't have any evidence to support that, so it's mostly experiential that a lot of people believe in something bigger. Yeah, that's what we have to go by. Yeah. So then, even from the science side, when we think about your parents and their parents and their parents and their parents, and mine and mine and mine, all of our ancestors, it all does go back to a single source, even from a science side. Yeah, it still goes back to a single source, of Eve, I guess, the mitochondrial Eve. Yeah, and even going pre-multi-cell, pre-single-cell, the seed of life on this planet, pre-even-Big-Bang kind of. So
now the question is: if it all does come from a single source of creation, a single origin, then isn't it all interconnected? Isn't it all then God? Isn't it all that? Yeah, I guess the flip side of that is with quantum mechanics, right: what we learn from that is that the world is very non-deterministic, and basically you can't predict some very basic properties of the universe because they're stochastic. So even though it may have started from this primordial atom, where we are today is a much, much higher entropy state of the universe. But then, on the other hand, we still haven't reconciled quantum mechanics with gravity, so maybe there's some deeper theory that combines the two that we haven't discovered yet, that explains that maybe there is a connection at some kind of deeper level that we haven't understood. So we were just interviewing Klee Irwin with Quantum Gravity Research just yesterday, actually, and they have an emergence theory that they're working on that bridges spacetime with quantum mechanics. It's very interesting thinking about what it is that makes the very simple laws of this origin become super duper complex. Where do you think free will plays into the equation? Yeah, I think that's a really tough question, and there's free will at different levels. Recently you have people like Yuval Harari saying that in order for humans to stay relevant in an age of AI or AGI, we have to really hone our free will, know ourselves better than the algorithms know us. So there's free will at that level: our algorithms are getting so good at understanding us in some ways that they can predict what we're going to do or what we're going to like, on par with what we would predict or maybe better, which can be used for good, maybe recommending music you like or movies you like or clothes you like, or for bad, in terms of recommending ads that are fake or
things like that, having big consequences for elections and democracy and news and so forth. So that's kind of the free will even without getting into the spiritual level; there's free will at just a technological and societal level, which is changing very rapidly in the environment we're in right now. And then at a spiritual, or maybe neuroscientific, level, it's something that we used to think about a lot working in neuroscience labs for all those years, where we were actually recording from the neurons which lead to the behavior 150 milliseconds later. So you can record from a neuron that encodes that you're going to move to the right, and very reliably you're going to move to the right 150 milliseconds later if that neuron goes off. And similarly for more complex behaviors like speaking or grasping objects: all kinds of things can be predicted from the activity of these kinds of neurons. And then the question is, okay, so what led to the activity of those neurons? And you can work all the way back to a point, and one view is this Francis Crick view. I read Francis Crick's The Astonishing Hypothesis in the '90s, one of the books that influenced me to get into neuroscience, the astonishing hypothesis being that everything we are, everything we decide, is just a pack of neurons. And so that's a very materialistic view, that everything can be explained based on the physics of what's going on in our brains, and I mostly subscribe to that based on what I've seen. The one open question, probably, is just why there is a subjective experience at all, the qualia problem. And the simplistic explanation is that it's just the correspondence of the state of millions and millions of neurons in your brain with some external state of the world, combined with the experiences you've had up to that point, which have led to the configuration of the brain the way that it is. So I think that's
one explanation, which could be true, might be true, but it's very hard to falsify or prove, so I think there's some uncertainty around that still, even if maybe just a little bit. Does it ever feel like there are certain forces at play through humans, like channels, on this big board game of planet Earth? Yeah, I don't know if I subscribe to that view. I think it's an interesting metaphor, and it kind of helps understand the world. And maybe some of what religion was trying to do was provide enough story that people could understand; people several hundred years ago, for most of human history, haven't really had the time to get deep into these issues like we can, because we're not spending most of our days just trying to get our food and so forth. So maybe that was a way to simplify some of these issues and principles for people, in terms of how to live their lives, how to organize early society and so forth. But they also make really good movies, like Star Wars, you know, good versus evil, the Force and the dark side; those are timeless stories, they never grow old. But I don't think I believe there's something fundamental in the universe along those lines. And then how about the overall teleology of the species? What is the purpose of the human experiment? Are we all just these beautiful creative expressions of creation that are all making different paint strokes with our lives, or different notes being played from instruments in the symphony of life? Is that the point, or what is it? I think that's definitely a very beautiful way of looking at it. My personal experience and view has been more around curiosity and maximizing our understanding of the universe. I think just satisfying more and more of our curiosity, and just staying curious, gives that additional meaning
beyond what biology might say we've been built for. And I think that's something that could be independent of us as humans; it could just be a value. Let's say in the far future we have AGI forms that are not carbon-based: I would consider humanity a success if we can infuse in those beings this same kind of value of curiosity. And I think from that, a lot of the humanistic values also flow, like kindness and empathy; a lot of the things we consider morally good come quite naturally from curiosity about who other people are, where they're coming from, what's driving them to do the things that they're doing. Yes, embedding consciousness in superintelligence is critical; otherwise it's like Disney World without any kids to enjoy it. Yeah, I think that's almost like a parallel question: will they also have this subjective experience like we do? And it's going to be very hard to test, because they can just say that they are having it, but how do you really know? How do you feel? Yeah, how does superintelligence feel? Very interesting question. Interesting. And I think another point there, maybe, is just that they could help us solve some of our limitations. Totally, and run billions of permutations of creative solutions, versus us; I can barely abstractly reason about, like, six things at the same time. Yeah, I think there's a lot of potential there, because we're great creatures, but we are limited, and so if we can use superintelligence as a tool to get to the next level as a society, that's again very exciting. Yeah, it could be that it is about maximizing that curiosity, the amount of consciousness, like how much consciousness can the universe have that's experiencing itself. So can we make it a hundred billion humans having meaning, being creative, being curious across the cosmos? Yeah, that seems to be one of the ideas of what is the meaning or the
purpose of it all. Does it ever feel like we're in a simulation? It can, yeah. I also watched The Matrix in the late '90s, and I was very excited by it. I think my take on that these days is that, again, there's no way to test it, and it doesn't have an impact on what I do, so for all purposes it's not very consequential what the answer to that question is. We may be able to poke at it with science soon, and that could be quite interesting. Yeah, that's a very good question. It's also like leveling up regardless: we keep leveling up our characters, we keep achieving our North Star divine purpose more and more every single day regardless. It's a great thought experiment and all this other type of stuff. What do you think is the most beautiful thing in the world? I had a daughter a couple of years ago, so spending time with her and watching her grow up is definitely very beautiful. Of course, I'm sure we're wired to believe that as well, as a species: my children are so beautiful, I love watching you grow up. It's probably also the source of a lot of problems in the world, selfishness and so forth, I want to make more copies of me; so we have to watch out for that while acknowledging it. But I think we're equipped to appreciate beauty in nature, in relationships, and through meditation; we definitely have this tool in our bodies, in our minds, to seek it and to appreciate it, and everybody has different ways of receiving it, so we should all try to make the best of that. Music is another one. Yeah, the beauty being in the experience of the person experiencing life. Arjun, thank you so much for coming on to the show; this has been a huge pleasure. Thank you so much for coming on. And a huge thank you to everyone for tuning in; we greatly appreciate it. We'd love to hear your thoughts in the comments below on the episode; let us know what you're thinking. Have more conversations with your friends, your families, coworkers,
people online, on social media, about the future of AI and the future of different chip designs and all of the different complex things that we talked about in this episode. Have more conversations around these things. Also check out the links in the bio below, the profile, as well as the link to it on Twitter; you can check those out. And also support the artists, entrepreneurs, organizations, and leaders around the world that you believe in; support them, help them grow. Support Simulation: our links are below to our PayPal, Patreon, and cryptocurrency; all those links are below. Help support us, and also go and build the future, everyone. Manifest your dreams into the world. We love you very much. Thank you for tuning in, and we will see you soon. Peace.