As far as AI is concerned, when I look at my own ecosystem in India, I see a lot of people who have come from rural areas to cities like Mumbai for many reasons, but one of them is poverty. Now, can AI directly do anything about that? Maybe not, but there are many indirect ways it could help. For example, most of the domestic workers in our apartments in Mumbai are women. They are part of the roughly 50% of women in India who are illiterate, who have no formal education, or who never even reached primary school. What if there were AI systems designed to teach them some basic life skills, teach them a little English, and give them confidence and awareness of their rights? A lot of things would change, and I think that would also indirectly help alleviate poverty. What is very exciting is that you can have systems which behave intelligently and perhaps even understand, and make connections around, what the human user is thinking and needing. All of that becomes very pertinent in cultures like India, where there are many entrenched hierarchies, whether of caste or of class. These hierarchies sometimes make it very difficult for those at the bottom of the pyramid not to feel intimidated in situations where they could learn and be exposed to new things. But one of the good things about a machine is that it is completely non-judgmental, completely neutral. I think that is a major possibility we should keep in mind. There is the positive side, and there is also some fear and some concern.
The positive side comes from the incredible speed with which AI can crack complex problems: diagnosing a particular situation, not even necessarily a disease, just a particular situation, and suggesting a solution to the user. That would take away a lot of the repetitive work of calculating and coming up with answers, which we humans are neither very good nor very fast at, and we could concentrate instead on being creative in solving the problems around it. I think that is very positive. What bothers me, though, is this: is there a set of values we are embedding in our artificial intelligence systems, so that a system never breaches the fundamental values we all share as human beings? If that is not there, then as these systems get more sophisticated, and they connect and make their own decisions, who is to say those decisions will always be conducive to us? It is not that the system has any bad intent, I don't think, but there is no blueprint I can see now for values that are part of the DNA of an artificial intelligence system. I think that needs to be embedded. Then I think the future is very bright: grounded in those fundamental human values, artificial intelligence systems could enable all of us, every single one of us, to go to another level of our journey, of our evolution. That is how positive it could be. I feel that focusing on goals like the United Nations' 17 Sustainable Development Goals is a really good way to get everybody converging on one set of things we all want to solve. But I do feel that in the development and design of AI solutions there is a lack of diversity, a lack of diversity in understanding other cultures and contexts and their needs.
There is also a lack of diversity in the professions contributing. I think more social scientists, designers, and philosophers need to be part of the way we craft solutions for the future.