Hey, thanks for joining. This is Christian Buckley with another MVP Buzz Chat, and I'm here today with Adnan. Hey, great to have you. Thank you, Christian. Thanks for inviting me. It's good to be here. Yeah, so for folks that don't know you, why don't you introduce who you are, what you do, and where you are? Sure. So hello from sunny Florida. My name is Adnan. I am the chief architect for artificial intelligence and machine learning at UST Global, a consulting services and products company. I am also a Microsoft MVP and a Microsoft Regional Director, a newly minted one. My MVP award is in artificial intelligence; that is my area of focus. Before becoming an AI-focused MVP and architect, I worked in the financial technology space, where I built a lot of large-scale systems with MasterCard, and I worked for a startup called Green Dot. It's no longer a startup; it's a billion-dollar company. My specialty there was building microservices and cloud-based architectures. I'm also a visiting scholar at Stanford University. I work with Dr. Chris Manning's group on natural language processing and understanding. For my company, I manage the academic relationships with MIT and Stanford, what's called the Alliance Program. That gives me what I like to call the intersection between academia and industry. My PhD is in machine learning, so I try to take the exciting work built in academia, especially at the Stanford AI Lab, which is the birthplace of Google, and apply it in industry across a variety of sectors. I work closely with retail, finance, and healthcare; those are the three key areas. I try to bring in the latest and greatest from academia, whether that's anomaly detection, predictive analytics, or deep learning, and apply it. So that's a decent introduction to what I do. So I have a former business partner.
I had a consulting company when I was living in Northern California, and my partner was also a visiting scholar at Stanford, a very smart young man. Stanford put him through law school, and you know you're a certain level of smart when Stanford pays you to go to school at their university. He is now a practicing patent attorney and worked for Google for years, a good friend of mine, Dr. Michael Meehan. We worked at some startups together back, was it that long ago? 25, 30 years ago? Wow, it's a long time. More like 20 years ago. Okay. How old were you back then? 16? Yeah, I'm not that old. I met him in my early 30s. But anyway, the great thing about that startup I did with Michael was the relationship we had with Stanford and UC Berkeley. A lot of clients came through that. We worked with a number of faculty members in high tech, and as consultants we worked with a lot of these newly funded, venture-backed startups. It was fascinating just to be there at that time, in that location, associated with those two schools. We weren't creating those technologies and we weren't formally part of those startups, but we got to hobnob with them and hear the formation stories. It was just a very exciting time, so I enjoyed those years with that company. Yeah, it's exciting to be in that environment. It's the energy there. At the Stanford AI Lab, you're walking around and you see Dr. Fei-Fei Li. She is one of the leading figures in machine learning and deep learning, and she is THE person in computer vision. And then you see someone like Jure Leskovec, who was chief scientist at Pinterest.
You see Chris Manning, who is a leading figure; you read their books in graduate studies or undergrad, and then you meet these people. And they're just hanging around, in the cafeteria, easily accessible, right? Christopher Ré, he sold his company. He built this deep learning platform which is actually used for detecting human trafficking patterns. It's funded by DARPA; it's a very fascinating case study. I'm blanking on the name of it, but Christopher Ré built this whole platform around looking at dark data, and a company came out of it which was sold to Apple for, I think, close to $200 million. And you just see him walking around. Sebastian Kern is another one, and Dr. James Zou. And it's the same at MIT; you see all these luminaries, and it's fascinating to work with them. Well, that's very cool. I was just thinking, it wasn't that long ago that Microsoft added artificial intelligence as a domain, as an award area for MVPs. Are you one of the first AI MVPs, or has it been around longer? Wasn't it like four or five years ago when they introduced it? Not to my understanding; it's just about three years. So I was one of the first MVPs. The first batch was around 46 people. And I think Microsoft Research has been at the forefront of this area for a long time. They have done a lot of work there, and from an implementation perspective you can see a lot of early products came out of MSR. And Azure adopted it early. So from an MVP perspective, I guess it was not as early as it could have been, but as soon as the products started coming out, it became an award area, like quantum computing.
Now you see that quantum computing is also available as an MVP area. And the Azure ecosystem is fairly mature now when it comes to analytics and machine learning implementation, around reinforcement learning, deep learning, and machine learning capabilities. Well, that was exactly my point. The MVP area came later, but Microsoft has been doing things for many years. Back when I was there, I left in 2009, I used to go over to Building 99, where Microsoft Research was primarily based. I don't know if they're spread out now or still in that building. But I used to go to their demonstrations, these presentations of a bunch of things they were working on, and they were fantastic. Of course they brought in other guest speakers and did other events, but it was really fascinating to see who was in the audience. At the time I was in online services, over in Microsoft Advertising. But sitting around me, hearing about these pure R&D efforts, were a lot of the product teams, like the Office folks, hearing about it for the first time too, which actually kind of surprised me. That's a different topic, that R&D research could be so disconnected from the product side. And I know you want your research team to be somewhat disconnected, because they're out thinking things up and building things; it could take years before you start thinking about productizing those. But it just seems like there could have been tighter integration, which is, again, what the labs tried to do. But that's a longer story. So the question I'm finally getting to is: what are some of the things that have come out of the Microsoft ecosystem in the artificial intelligence and machine learning space? Yeah, that's a question which could be quite a show of its own.
But if you look at MSR in general and the things that came out of it: in the 2009-2010 era they had Infer.NET, an inference framework for Bayesian inference; it actually has a bunch of different inference capabilities. There's PROSE, which is program building by examples. There are a lot of machine learning models and deep learning models which came out of that research in general, and a lot of work which went into the back end for Azure. For example, data center monitoring for power consumption, predictive analytics based on usage, anomaly detection, outlier analysis. So based on my understanding, and again, you were there, so you probably know better than I do, there was tighter integration in certain groups. With the Office group, for example, there was a lot of integration; Microsoft Delve actually came out of MSR. And then maybe in some other groups there was not as much. One of the key things you can see is the pipeline from academia or research to applied implementation. You can see a lot of products coming out of it, like computer vision and the entire Cognitive Services family; that is in part an outcome from there. There is also, of course, the whole deep learning landscape around CNTK, rest in peace. But I think one of my favorite things which came out of MSR is the language models. You've probably been hearing about the language models: the BERTs and ELMos of the world, and the transformer model. If you look at how language models have transformed, I work closely in NLP and that's something near and dear to my heart. We used to work with bag-of-words models, the TF-IDFs and LDAs of the world, and n-grams. That's old-school NLP, I guess.
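(The "old school" bag-of-words pipeline just mentioned can be sketched in a few lines. This is a minimal illustration with a made-up two-document corpus, not any particular library's implementation.)

```python
# Toy TF-IDF: weight each term by how often it appears in a document,
# discounted by how many documents contain it at all.
import math
from collections import Counter

def tf_idf(docs):
    """Return per-document TF-IDF weights for a list of token lists."""
    n = len(docs)
    # document frequency: in how many documents each term appears
    df = Counter(term for doc in docs for term in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        weights.append({
            term: (count / total) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return weights

docs = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
]
w = tf_idf(docs)
# "the" appears in every document, so its weight is zero, while
# "cat" is distinctive to the first document and scores above zero.
```

The limitation the speaker is pointing at is visible here: the model only counts surface tokens, so it has no idea that "cat" and "kitten" are related, which is exactly what embeddings and transformers later addressed.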
And then from there we moved to slightly more sophisticated models, like using RNNs, recurrent neural networks, for sequence generation. And from there we came to LSTMs, which are part of the RNN class but a bit more sophisticated. And now we have switched to these transformer models, which are almost like magic. You can predict what a text is going to say, you can generate sequences, you can do comprehension on documents, which is magical; you can understand documents without any prior knowledge. It's just trained on, like, Wikipedia, pretty much the entire internet. And then it understands, in an unsupervised manner, how close two concepts are. For example, if you ask who is the protagonist in The Matrix, or who is the hero in The Matrix, or who is the leading actor in The Matrix, all of these concepts are closely aligned in a high-dimensional vector space. So you don't have to train the synonyms. The transformer network knows the synonyms; it knows the concept hierarchy. You can just feed in a PDF document and ask questions about it. Microsoft has just recently released a much larger transformer model. And I think that's one of the things which is very elegant about that solution: it can actually work in an unsupervised manner. There are certain challenges in terms of ethical AI and operationalizing those models. You probably remember that whole debacle when OpenAI decided not to release one of its models. But it's fascinating how fast that space is moving. And of course it's not without its challenges; especially in the day and age we are living in, the importance of ethical AI can never be overstated. Yeah, I saw in your status that you've been talking about that a lot recently, and I know it's something that Satya has brought up. Of course there was the, what was the name, the Cortana-like experience.
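(Going back to the vector-space idea for a moment: "closeness in a high-dimensional vector space" is usually measured with cosine similarity. A minimal sketch with hand-made 3-dimensional vectors; a real trained model would learn embeddings with hundreds of dimensions, and these particular numbers are invented for illustration.)

```python
# Cosine similarity: two concepts are "close" when their embedding
# vectors point in nearly the same direction.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical embeddings: "protagonist" and "hero" were placed near
# each other; "spreadsheet" points in a different direction.
emb = {
    "protagonist": [0.9, 0.8, 0.1],
    "hero":        [0.85, 0.75, 0.2],
    "spreadsheet": [0.1, 0.0, 0.9],
}

sim_hero = cosine(emb["protagonist"], emb["hero"])
sim_sheet = cosine(emb["protagonist"], emb["spreadsheet"])
# sim_hero comes out far higher, which is why the model treats
# "protagonist" and "hero" as near-synonyms without explicit training.
```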
It was the AI persona that Microsoft put out there, and within hours it became racist. I mean, it was just crazy how fast it went, based on the questions that were being asked and the rat hole it went down looking for information. And if you can hear my little fur babies upstairs, the UPS guy must have delivered a package. But anyway, it's been fascinating. Now, I'm not keeping up on everything I read, but I've got good friends; Naomi Moneypenny joined Microsoft a few years back and she's at the center of Project Cortex and a lot of what's happening there. Even before she joined Microsoft, I would hear through Naomi about a lot of the developments and efforts going on in that space. And my observation is that there's been a lot of shifting of roles, as happens with Microsoft employees at the end of their fiscal year, and a lot of people are moving into that space. Microsoft is building up a lot around AI in general. But from a collaboration standpoint, which is my primary focus, and even from a project portfolio management and intelligent systems standpoint, Project Cortex is really exciting. Anything around collaboration and knowledge management, I'm very keen on seeing and hearing more about. I don't know if you're working in those spaces specifically; do you have any deeper insights into what Microsoft is doing there? Microsoft is doing amazing work in the AI and machine learning areas in general. So just looking at document management, or knowledge management I should say, there's the knowledge graph structure they have in Azure. I don't know if you're familiar with it; you've probably seen the JFK Files demo already, have you?
So that's a fascinating demo, because it's a very interesting demo and it also shows a lot of different capabilities people miss. From a knowledge management perspective, a lot of companies struggle with the idea of taking a document, an actual physical document, and identifying and extracting the entities in it. Applying OCR, cognitive OCR. So you have Form Recognizer now, which can actually do this. And what is typically missed is how it can work with forms of varying layouts, and how fault tolerant it is. It can work on different dimensions of forms and still read them and apply more sophisticated AI on top. And I'm using "AI" loosely here; I'm sure you're going to get some hate mail about that. But that's one of the areas where knowledge management is very valuable to apply across industries. In insurance companies and fintech, you would be surprised how many LLC forms get filed manually, even though we have all the different technologies available to do data ingestion and processing from digital data. And there are tons of use cases around it, including medical records. So that solution, the Microsoft Azure and Cognitive Services solution which does that data ingestion, is very powerful, because it not only ingests the entities but also keeps track of them as a knowledge graph, and then you can search on it. One area which is very powerful, and we were talking about ethical AI, is that Microsoft is working a lot on the transparency of AI. So you've probably heard of Fairlearn, a toolkit that's recently been released; I think they announced it, but don't count on me for the exact date.
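(The document-to-entities idea described above can be sketched roughly. The real services return structured key-value pairs from scanned images; this hypothetical version just illustrates the post-OCR step, turning recognized text into entities you could load into a knowledge graph. The field names and values are made up.)

```python
# Hypothetical post-OCR step: pull key-value entities out of the text
# a form recognizer produced from a scanned document.
import re

ocr_text = """
Policy Number: AB-12345
Claimant: Jane Doe
Date of Loss: 2020-03-14
"""

def extract_entities(text):
    """Map 'Label: value' lines to an entity dictionary."""
    entities = {}
    for line in text.strip().splitlines():
        match = re.match(r"\s*([^:]+):\s*(.+)", line)
        if match:
            entities[match.group(1).strip()] = match.group(2).strip()
    return entities

entities = extract_entities(ocr_text)
# {'Policy Number': 'AB-12345', 'Claimant': 'Jane Doe',
#  'Date of Loss': '2020-03-14'}
```

Each extracted entity would then become a node or edge in the knowledge graph the speaker describes, which is what makes the ingested documents searchable.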
But there's also another toolkit which just came out called InterpretML, and that's part of the interpretability alignment. The idea is that if you are deploying a model in the cloud, how do you make sure that it's secure, it's trustable, it's robust, and it's fair? If I'm a machine learning engineer and not a SME, and I build and deploy a model, it may be gender biased or race biased because of the data that went into it, and I can say, oh, I didn't know about this, the data is biased. But it's my ethical responsibility, and the organization's responsibility, as an AI governance matter, to understand the bias and actually report it to the SMEs; the SMEs need to see this. This is part of the AI lifecycle. One of the things Microsoft did early was come up with a data science lifecycle called TDSP. You've probably heard of CRISP-DM, which used to be the back-in-the-day lifecycle for data mining. And we have, of course, traditional SDLCs, but data science is its own beast; model training is not a one-time process, as you know, there's a feedback loop. And that's what happened with Tay. It's actually a learning exercise, and if you do mini-batches or online learning, that's where you have to be careful. Tay was that bot which went all xenophobic and racist and had to be pulled. It gets quoted a lot, but there's another horror story where Google trained their photo program to, well, you know that one, right? Yeah. Well, I always like to remind people, when they bring that up, I do the exact same thing: there were a lot of other very public failures of the technology.
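(One concrete example of the fairness checks that toolkits in this space automate is the demographic parity difference: the gap in positive-prediction rates between groups. A minimal hand-rolled sketch with invented loan-approval data; a real audit would use a proper toolkit and real protected-attribute data.)

```python
# Demographic parity difference: how much the model's approval rate
# differs between the best- and worst-treated groups.

def selection_rate(preds):
    return sum(preds) / len(preds)

def demographic_parity_difference(preds, groups):
    by_group = {}
    for pred, group in zip(preds, groups):
        by_group.setdefault(group, []).append(pred)
    rates = {g: selection_rate(ps) for g, ps in by_group.items()}
    return max(rates.values()) - min(rates.values())

# 1 = model approves the loan, 0 = model denies it (toy data)
preds  = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(preds, groups)
# group "a" is approved 3/4 of the time, group "b" only 1/4,
# so the gap is 0.5: exactly the kind of number that should be
# surfaced to the SMEs rather than buried in the training logs.
```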
Well, it's why I think of things like Project Cortex and think, hey, it's wonderful what it could be doing. For those that aren't familiar with it, definitely go take a look, but the idea is that artificial intelligence will identify topic cards based on interactions, and auto-provision essentially a wiki site around a project or a topic that's essential to your organization. Well, one of my first questions: it's still in private preview, so most of us haven't actually seen it. I know people that are actively working within it, and I believe later this year we're going to see it become generally available. But my question is, what's the curation process? As an organization, as the owner, as IT, as the management team, what's the oversight of what's created before it goes live? That goes to the trust question you just brought up. I know it can functionally go and build this, but it does need to learn. Any of this technology has to learn; there are going to be corrections to make, so that curation process is essential. AI run amok has again and again proven that there are problems. Those biases come up, whether they're based on the data, or on the biases of the individuals that coded it who just didn't think of those things. You have biases that are cultural, if it's built by an American who isn't thinking about the global impact of some of the decisions made around it. So that curation process is essential. So governance, as always; governance is key. Yeah, and Microsoft has done a lot of initiatives around that. There's a whole data card initiative; you can actually see the ingredients of the data. Whatever is being published, you can see that this data was acquired from such and such a place.
What kind of data is in it? Sorry, I was just getting another call. Yeah, no worries. So that's around the data sets: you can actually see where the data was received from, and what kind of diversity is in that data set, right? That's an important thing. Microsoft has this whole transparency initiative around this, where the systems are transparent so they can explain themselves in a way a human subject matter expert can understand and interpret the decisions they are making, and it keeps an audit trail of these things, right? These are some of the key principles where AI governance has to be there. So, systems are fair. Fairness is an important one: what is fair? I don't want this to become a debate around affirmative action, but certain biases, like you said, are negative and they already exist in the data set. The problem with a machine learning system is that it can perpetuate the bias. From a human bias perspective, you can train humans and put standards around this to say, okay, you cannot discriminate based on gender, race, faith, sexual orientation, or other protected classes. But if these biases get ingrained into machine learning systems, they perpetuate themselves, and that's really dangerous. This is the Terminator scenario a lot of people talk about. I actually have a whole set of books right here exactly on this topic; I'm working on some specific points of view around it. This is one of the most revered books I have: Weapons of Math Destruction. The author, Cathy O'Neil, is a math PhD, and she talks about this; it's a brilliant book. There's one called Technically Wrong. I guess I'm going to show off my library here, for those that are watching or listening in.
I do require all of my guests to bring suggested reading material, so I appreciate it. Yeah. That's right. I do the same thing. You do? Yeah, I love it; it's good stuff to share. So this one is very interesting: Algorithms of Oppression. The author was one of the original outspoken people against algorithmic bias, and there's this whole thing called the Algorithmic Justice League, which is a very interesting concept. But going back to Microsoft's role in that area: they talk about bias and how you can make systems fair. The key things are transparency and ensuring security against adversarial attacks. That's a very common theme now, that adversarial attacks happen on these AI systems and can cause issues; so how will you make sure that doesn't happen? The latest scandal in AI, I don't know if you're following it or not: there's a paper called PULSE, and the idea is you can take a small thumbnail, and if you upscale the thumbnail, it will produce a proper picture, based on the closest picture it can reconstruct. But it was trained on a data set which was predominantly white Caucasian. So when you take a small thumbnail of President Barack Obama and expand it, it became, not a person of color, but a white-skinned person in that picture. That blew up as a big issue: this is exactly the problem with artificial intelligence, you guys are doing this completely wrong. So one of the leading researchers, a Turing Award winner by the way, he won the Turing Award last year with Yoshua Bengio and Geoffrey Hinton, Yann LeCun, spoke out about it, and LeCun said it's because the training data was predominantly Caucasian; if you train it somewhere else with a different type of data set, it will come out different. And everybody took that as an excuse, saying, oh, you are just blaming the data.
And I agree, controls need to be put in place for this. Because AI and machine learning are in their infancy, just starting out, there needs to be something in place. A lot of people speak about this problem with the word2vec model Google released. What happens is, let me quiz you on this, okay? Blue is to boy as what is to girl... Pink. Pink, there you go. So the model learns that from the data set, right? I know these are the kind of questions you have to think a little more about; they get very tricky. These associations are learned by the models because they are trained on large data sets and they see the frequency of these words occurring close together. So they're like, okay, this is how it works. But it becomes bad really fast. For example: man is to doctor as woman is to... Nurse. Right. And folks, I'm not answering that way because I believe it; all I'm doing is repeating what the stereotypical response is. Exactly. You have set me up. I did set you up. I think this is a great point, because I'm good friends with the founder of a company in Redmond, Washington that works very closely with Microsoft on a lot of their projects, and their core technology is providing the data that goes into all of the beautiful demos that you see of whatever the products are. They've created technology to generate vast amounts of data and populate systems with it; like when the Microsoft Teams launch happened and they're showing all these demos, being able to show you analytical reports as if it had actually been used for six months and all that kind of stuff. It's not 120 people who for the last six months entered a bunch of fake data within Teams just so they could provide a pretty report for the demo.
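(The word-analogy quiz from a moment ago is usually run as vector arithmetic: solve "man is to doctor as woman is to ?" by computing doctor - man + woman and finding the nearest word. A toy sketch with hand-crafted 2-dimensional embeddings, one "gender" axis and one "occupation" axis; a real embedding trained on biased text returns "nurse" for exactly the stereotyped reason the speakers describe.)

```python
# Toy word-analogy test over hypothetical 2-d embeddings.
# Axis 0 is a gender direction, axis 1 an occupation direction;
# the stereotyped placements below are deliberate, to mimic what a
# model absorbs from biased training text.
emb = {
    "man":    [1.0, 0.0],
    "woman":  [-1.0, 0.0],
    "doctor": [1.0, 1.0],   # skewed toward "man" on the gender axis
    "nurse":  [-1.0, 1.0],  # skewed toward "woman"
    "pilot":  [1.0, 0.8],
}

def analogy(a, b, c):
    """Return the word closest to b - a + c, excluding the inputs."""
    target = [emb[b][i] - emb[a][i] + emb[c][i] for i in range(2)]
    def dist(word):
        return sum((emb[word][i] - target[i]) ** 2 for i in range(2))
    candidates = [w for w in emb if w not in (a, b, c)]
    return min(candidates, key=dist)

answer = analogy("man", "doctor", "woman")  # → "nurse"
```

The arithmetic itself is neutral; the bias lives entirely in where the training data placed the vectors, which is the speaker's point about models perpetuating what they are fed.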
No, they went and populated this data. But I could see how, here they are populating demo systems, providing this as a service to technology companies, and they could be extending a lot of those biases just through that automation. It's something they need to go in and think about: hey, we need to make sure we are truly providing a global service. They're limiting themselves, but even with their customers, Microsoft needs to make sure that the data they use in their demos fits the broadest constituencies of a global company. Anyway, it's fascinating when you think about it. My opinion here is that we so quickly jump on the case and attack the people, in this example the companies that go and attempt to build this new technology, when the biases are shown. I think there needs to be some room, some space given to companies to recognize, hey, we need to make adjustments and fine-tune this. We had three white Americans and one Asian American code this; don't get upset if it doesn't fit every geographical constituency we're trying to market to. That's who built this product; now we need to think about how to expand that. Yeah, I agree: a safe space to experiment. You have to have a safe space to experiment. Otherwise, what's the alternative? People are not going to stop experimenting and building new models. What you're doing is just putting them in a place where they're not going to publish the results, while people start using the models anyway; it's essentially making it back-alley experimentation. And imagine: larger companies have the capability to do these experiments and publish reports, but smaller companies don't have the capability to do all of those experiments or even publish reports around the diversity of the data, et cetera.
So having a mechanism around that is as important as the other aspects of this. I was going to talk to you about AI governance standards, but one thing you brought up is the building of a dynamic taxonomy with Project Cortex. I think that's a pretty fascinating idea. I'm looking forward to getting into it as soon as it becomes publicly available. Yeah, so what was the name of the failed Microsoft AI? What was the persona's name? Sorry? Tay, T-A-Y, Microsoft Tay. Oh, Tay, that's right. So I hope we don't have another Tay experience when it goes live, but that might be why it's taking a little longer; we've been hearing about it for a year, a year and a half. But that's why, again, my default is: what is the governance, what is the curation process, and what is change management going to look like for that solution within an enterprise? So I'm excited to see it and kick the tires. I think we need to temper our expectations around these things. It has to take time to learn and adjust. No matter how robust it appears when it becomes generally available, or expanded preview, whatever the next step is, it also has to learn from our usage of it, from our organization. And to your point, it may then uncover things that are the norm, that are part of the culture of our organization, that we need to change as well. It's fascinating stuff. I think we just need to be more lenient, more understanding, and have a little more empathy for the organizations. Here's a great way to bring this up: I come from a SharePoint background, from when Microsoft acquired FAST Search and started incorporating the technology into the platform.
With SharePoint 2010, when that finally came out, it was in a separate server license at first, but the technology was eventually integrated into the general SharePoint release. We heard from customers all the time: search is broken. As we would go and work with them to uncover what was happening, search wasn't broken; it was starting to work. And because it was working properly, it was now surfacing poor data management. It was surfacing poor taxonomy and metadata usage; they had been sloppy with their data, and search was uncovering that. So they had to clean up permissions, clean up taxonomy and metadata, all those different things. I think Cortex and this AI technology will do the same thing. It's not that some organizations are going out there trying to be racist or unethical in their behavior around data. It's that we're going to need to further refine our systems and our data so they more accurately represent who we are as companies. You bring up an excellent point. My tongue-in-cheek line is that AI is going to make us more human, because it discovers our biases, right? Look at the recidivism use case. That's a huge problem, and that's why I can see the point of view of people who want space to research and experiment, but recidivism is definitely not a place to play with these kinds of things, because that is playing with lives. Who is going to get parole and who is not? If that is decided by a machine learning algorithm, we have to make sure there is not much room for bias in there. Who gets to go to college? Who gets a loan? There are some life-changing, high-impact decision-making processes where we have to do our absolute best, right?
But when it comes to a movie recommendation, where Christian really likes rom-coms and he gets recommended a horror movie, that's not the end of the world. But if I am being recommended a specific hospital because I'm a minority or a person of color, and that actually happened: in New York, a large healthcare company was sending people of color to specific hospitals where they were not getting the right treatment. That's where AI governance definitely needs to be, and that's a point other organizations can bring into play. But on the point you brought up about diversity and inclusion: in the area you specifically work on, communication and coordination, how inter-organization communication works and the tools and frameworks around it, I think for D&I specifically, AI can be a huge, huge application. For example, thinking about how different people collaborate: is there clique-forming, or are there conversations or issues associated with bias? Are people being allowed to express themselves the way they want to? And what does the organizational hierarchy look like? There's the official organizational hierarchy and there's the real hierarchy, how it's actually portrayed within the organization, how people network with each other, and how we can improve that to bring more openness to the environment. So those are areas of interest, yeah. Yeah, and beyond the biases, I guess it's a D&I topic, but it's also just a management topic, better management. It's just a fact that there are a lot of people managing other humans that should never have been put in charge of other humans. Just because you're technically competent, the best at what you do, doesn't mean you should manage human beings, other people.
And that's kind of a hard lesson that has been learned in a lot of organizations. But here's where the technology can actually help: let's say I'm a manager with 10 direct reports. If, based on the work patterns and the communication patterns and the style and the language used, it can identify that one of my direct reports is more of an introvert and is more data oriented, and another person seems to be less detail oriented but is very collaborative and idea driven, then you can't measure those people the same way and find success in managing that team. It's based on that relationship. And I'm not saying that AI will ever replace the need for the human interaction between a manager and direct reports and the relationship aspects of it; there are a lot of dynamics there. But the more that it can pick up on those nuances, provide suggestions, differentiate, and automate on that, I think we're gonna have happier teams, we're gonna have higher-functioning organizations, the more that it can provide that kind of practical help for management, for organizations.

I think it's fascinating. And I know that we're kind of over, but I do have a final question for you, which I know around this topic is on a lot of people's minds. I'd say most people that are watching this have this question: when will robots take over? Is it five years from now? 10 years? When will AI become sentient? When will there be that moment? What is it called? Is it the moment when it becomes self-aware and then it destroys all of us? How far are we from that?

I'm working in my basement on that project. So that's a question that was actually put to attendees at, I think, the NeurIPS conference.
It had a historical name which is no longer considered appropriate, so now it's called NeurIPS; it's one of the largest conferences in the world, for neural information processing. And I think the attendees were asked this question. So artificial general intelligence is what you're referring to, AGI. AGI then would become sentient, I guess, once it can do the general intelligence part of it. People think about 35 years, though there's a whole range of different answers. Right now, it's basically automation on steroids. What we are doing in AI and machine learning is automation on steroids. But I hope that by that time we have fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. These are the six key principles Microsoft has for AI. I hope by that time we have all of these in place so we don't have to be scared of them taking over, but rather have a human in the loop.

So building upon what you said earlier, in terms of the context of how humans interact, whether somebody's an introvert versus an extrovert and what kind of medium of conversation should be provided, automating that with a human in the loop would be really helpful. The human in the loop is the ultimate judge of that thing, because if you create measures around these things without having that human insight, I think from a social aspect it becomes a recipe for not-so-good things. But in general, the six principles Microsoft has presented, I think these are really important: fairness, so it treats people fairly; reliability and safety, so it doesn't act weirdly when there are adversarial attacks; privacy and security, so it respects the privacy considerations you put in there; inclusiveness; transparency, so there are no black-box algorithms and people can actually understand what's being used; and accountability.
So in that case, the whole apocalypse you're predicting wouldn't happen, hopefully.

Yeah, that's right.

All right, well, if it happens, I'm coming back to you and asking, you know, what's going on here?

Now, there are a lot of smart people who actually say that this is one of the biggest threats, including Bill Gates, Elon Musk, and Stephen Hawking. And I think the considerations and challenges they propose are real, because they think about everything from autonomous drones all the way to bias propagation in AI; there are tons of different bad use cases out there. There are, again, also tons of good use cases. As humans, we are perpetually optimistic. We try to find the good, and we are also not very forward thinking.

Well, that's why, if people get a little too overly optimistic and excited about the technology, hey, we're gonna live a life of luxury once it's all just kind of running in the background. One, you don't understand human beings if you think that's ever gonna be the future. But two, just go watch that episode of Black Mirror with the robot dogs, the autonomous dogs. That'll freak you out enough to say, no, there need to be restraints on what we do, and ethics in AI is certainly important. So, well, Adnan, really appreciate your time today. If people wanna find out more about you, get in touch with you, what are the best ways to reach you, to find you?

My Twitter handle is Adnan Masood, you can reach me there, or adnanmasood at gmail.com. That's the best way to reach out to me. I'm also on LinkedIn, and I have an active blog where you can reach out to me. Really glad to be connected with you, Christian. Really glad to become a regional director and part of a community of smart people like yourself.

Yeah, welcome. Despite the fact that I'm in the program, it's a great program to be a member of. But yeah, just get ready to drink from the fire hose with those DLs.
It's a daily thing. Oh yeah, yeah, get ready. Okay. But anyway, well, thanks a lot for your time today and we'll talk to you soon. Thank you very much, have a wonderful day.