I'm representing MUST Research, an initiative we started earlier. I have worked with Microsoft, I spent a decade with IBM as well, and I was a researcher at the Indian Statistical Institute. I started my AI journey in 2002, about 16 years ago. At that time, "library" meant only books, not Python libraries; these open source technologies did not exist. We used to write code from scratch, sometimes in C (#include <stdio.h>, if you can recall), and writing the code for an artificial neural network took almost 2,500 lines. Right now we can probably do the same thing in a single line, or two or three.

More importantly, there were no ready data sets either. Now there is a lot of online data you can just download; that did not exist. So I used to collect primary data. When I was a researcher at the Indian Statistical Institute, I requested my friends and colleagues: please write 0 to 9, ten times each, so ten 0s, ten 1s, and so on up to ten 9s, on a 10x10 grid. Each individual wrote 100 numerical characters, and then I wrote another program to automatically detect the bounding boxes and extract the handwritten characters. Finally I built a data set of almost 10 lakh (one million) characters. Now, with MNIST, it is a five-minute job: you download it and use it.

But here is what we are missing now: since so much is available on the internet and in various forums, the tendency to think from a research perspective, to solve a real-world problem, is fading. We are more focused on applications, quick solutions, some business objective we have to meet: okay, let's plug and play so we can deliver something easily.

So this talk is all about the AI ecosystem. We are trying to build an ecosystem around artificial intelligence, data science, and related subjects, driven by data scientists across the nation. That is the objective. We try to encourage people to do research, to think in a deeper way: not just to use a third-party library, but to ask, can I write that same code myself? You may think that is reinventing the wheel, but we encourage people to reinvent the wheel precisely to understand the engineering of the wheel, so they can invent something better, or come up with a more innovative approach or more optimized algorithms of their own, rather than depending on a black box where we don't know what is happening inside.

MUST is an NGO, a completely non-profit organization, set up with a bunch of data scientists and data science enthusiasts across India. We have almost 300 members working on different research projects. And since data science, machine learning, and AI carry a lot of jargon (even "cognitive computing"), I have seen people get confused: is it AI or not ML, is it ML or not AI, or is it data science? These things are linked, often in a superset-subset way: machine learning is a subset of artificial intelligence.
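As an aside on that "2,500 lines of C versus a few lines now" point: here is a minimal sketch, assuming scikit-learn and its bundled digits data set (a small MNIST-like sample), of training a feed-forward neural network in a handful of lines:

```python
# A minimal sketch: a feed-forward neural network on handwritten digits
# using scikit-learn (assumes scikit-learn is installed).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)                      # 8x8 digit images, flattened
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)                              # backpropagation under the hood
print("test accuracy:", model.score(X_test, y_test))
```

The convenience is real, and that is exactly the tension this talk is about: the same few lines also make it easy to treat backpropagation as a black box.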
So we may not use machine learning at all in an AI project. It can be a very complex rule-based system: we still have to define the rules, extract the features, identify the variables, and perhaps apply some statistical method, but if we are not training a model on a data set, then it is not machine learning, yet it is still AI.

Have you heard about IBM Deep Blue? Anyone? The chess-playing computer which defeated Garry Kasparov, the grandmaster at that time; I think it was 1997 or 1998. It was a supercomputer, definitely, and it could think ahead of a human being, but there was no machine learning model inside Deep Blue. It was a complex rule-based system: all kinds of permutations and combinations of chess moves were encoded, and it was optimized so that while Garry Kasparov was thinking, Deep Blue was already thinking much further ahead, evaluating the next hundred possible moves and taking the best one. So let us not confuse these terms.

Now, what is ML? We have a data set, and we are trying to predict something unknown: a dependent variable y from independent variables, the different x's. A simple example is height versus weight, predicting one from the other.

And whenever we talk about data science, it is not only about ML or its superset, AI; other subjects are involved. Sometimes linguistics is required when we apply natural language processing. Physics is required when we work on image processing, because we also need to know the physics of light: an image is nothing but a matrix. With the RGB components (red, green, blue) plus width and height, it is a 3D matrix, and each pixel has some meaning. And then there is the business domain. Suppose we are working in healthcare: what is x, what is y? We have to think about that too. In the real world, no one hands you data and says, "Sir, please fit a model." There is a lot of missing data, a lot of unstructured patterns that are not defined at all. We have to understand what we need to solve, and discuss with experts as well.

For example, in a cancer research healthcare project I worked on earlier, I saw a column called DOD, date of death; it was the first time I handled a data set with a DOD column. The objective was to predict the lifespan of patients with CLL, chronic lymphocytic leukemia, a kind of blood cancer. I was not aware of any of this, because there is a lot of medical jargon, Greek words. So I used to discuss with the oncologists, the cancer specialists, to understand these things, because otherwise the raw data set as given was not easy to fit a model on. Forget the model; even the variable names were not clear.

So data can be structured, like a tabular format where the columns are already defined. It can be semi-structured, like a log file or signals from an IoT device.
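Going back to the height-versus-weight example: a minimal sketch of "predict y from x" with scikit-learn, using made-up numbers purely for illustration:

```python
# A minimal sketch of "predict y from x": weight from height.
# The numbers below are invented purely for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

height_cm = np.array([[150], [160], [170], [180], [190]])  # x: independent variable
weight_kg = np.array([50, 57, 64, 72, 80])                 # y: dependent variable

model = LinearRegression().fit(height_cm, weight_kg)
print(model.predict(np.array([[175]])))                    # predicted weight at 175 cm
```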
Speaking of IoT, think about a smart fridge, which knows whatever vegetables you have put in. You open the fridge door and the light goes on; once you close it, the light goes off. If you take a photograph at that moment, and another photograph the next time you open or close it, and compare the two, there can be three states: one, you have added something; two, you have removed something; or three, you just opened and closed the door and did nothing. From there we can build a project: can the fridge identify, okay, my tomatoes are running short, I have to buy more? For that we need a sensor or a camera, plus various other IoT pieces. (A small sketch of that three-state photo comparison follows below.)

And then there is typical unstructured data, where we use natural language processing, computer vision for images, speech technology, and many projects. I think most of you are aware of sentiment analysis on Twitter, probably the most common NLP project. How many of you have a Twitter account? All of you, okay. How many of you have tweeted in the last week? Okay, 25%. How many of you have tweeted today? And how many of you have tweeted about ODSC? Only one in this crowd. See, this is a technical crowd, all of us related to data science, and we expect to extract a lot of insight from a Twitter dump, yet only one person in the entire crowd has tweeted about ODSC. So how can I assume the entire nation is tweeting, revealing whom they voted for, so that we can predict who will become prime minister in the upcoming election? We assume a lot in our hypotheses, but in practice it is not so easy.

Same with sarcasm. On many online retail sites I have seen reviews like "Awesome, you ruined me. I lost 5000 bucks." Now "awesome" is a positive word, but it came from frustration, and the person also gave five stars. So we have to understand the patterns in the data. It is not about blindly using something just because it is there: okay, I have to use Naive Bayes, or a neural network, because the tools exist and the data exists. If we are just connecting a few things and getting an output, then there is no need for a data scientist; anyone can do that, even a 10th or 10+2 student, if I tell them: this is the data set, this is the model, just connect them and get a result. We have to think from another perspective as well.

So at MUST we are focusing on cognitive computing: how humans think. We try to draw on human behavior and human decision-making and simulate them. We have implemented and demonstrated many projects on problem solving, and we found it is more complex than taking a simple decision. Even if I ask, "Should we have a tea break now?", the answer can be yes or no: typically a binary classification based on several parameters. You may think this lecture is so boring, let's have a tea break; then the answer is yes. Then you may think, we just had lunch and are feeling heavy; then the answer is no. And sometimes we think, take a decision, and then change our decision.
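Here is that promised sketch of the smart-fridge comparison: a crude three-state classifier over two aligned grayscale photos of the shelf, with hand-picked thresholds. Everything here is a simplifying assumption; a real system would need image registration, lighting normalization, and object recognition on top.

```python
# A minimal sketch of the three-state fridge comparison.
# Assumes two aligned grayscale photos of the same shelf as NumPy arrays.
# Crude heuristic: brighter pixels ~ newly occupied, darker ~ emptied.
import numpy as np

def shelf_state(before: np.ndarray, after: np.ndarray,
                pixel_thresh: int = 30, area_thresh: int = 500) -> str:
    diff = after.astype(int) - before.astype(int)
    added   = np.sum(diff >  pixel_thresh)   # pixels that got brighter
    removed = np.sum(diff < -pixel_thresh)   # pixels that got darker
    if max(added, removed) < area_thresh:    # hand-picked "no change" cutoff
        return "nothing changed"
    return "something added" if added > removed else "something removed"
```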
It has probably happened to you in an exam hall, on a multiple-choice question: you marked B as the right answer, then after some time realized, no, I think the right answer is C, so you erased B and circled C. Then you submitted your paper and realized, no, B was correct after all. It happens, right? There was no external input; it was purely your own decision. You alone took the right decision, then the wrong one, then the right one again. What happened?

It is the same in a neural network: we have to think about the optimization part, the thresholding part, about when we should stop thinking. "Don't think too much," we also say; take some random decisions, and it gets optimized at a certain point. For that we have to experiment a lot, and we are running this program, a set of projects on different kinds of unstructured data and on how humans take decisions from them.

The neural network is definitely one of my favorite algorithms, as I started my career with ANNs, and I'm happy to see so many people now working on neural networks with deep learning technologies: CNN, RNN, DNN, LSTM, which are essentially more advanced versions of the neural network.

The projects we build should be adaptive in nature, and definitely interactive, iterative, and stateful: as I mentioned, depending on what happens next, our decision may change. Suppose you are feeling very cold; you may think, let's take the tea break now, because this room is very cold and we can grab the tea instead of just taking a break. Sometimes decisions are fuzzy as well: "chai chalta hai" (tea would do), that kind of thing, not exactly binary. In life, everything is not binary.

And decisions are contextual, very much dependent on the domain we are working in. It can be healthcare, as I mentioned, or it can be telco. In one of my previous projects, I delivered a customer churn prediction model for a telco client, with more than 90% accuracy and high precision. And they asked: can you decrease the accuracy and increase the confidence? Imagine, as a statistician or data scientist, being asked to decrease accuracy; increasing confidence is fine, but why decrease accuracy? Then I found the reason on the business side. What will they do with the predicted churn customers? I am predicting that 30 out of 100 customers may churn, that is, leave the service. Knowing that, they will try to retain them; to retain them they have to give offers, and every offer has a cost. So they said: instead of 30, predict 20, but make sure those 20 will actually churn. I don't want 10 extra customers who may not churn; if they are non-churning customers and I give them offers anyway, the business is impacted.

So the domain and the context matter, and they vary from location to location. Even in telco we have seen customer behavior vary from state to state, that is, from circle to circle, as telecom regions are called: downloads of Bollywood songs vary from state to state, as do cricket score updates. I am talking about the pre-smartphone era, feature phones only, so these were SMS-based services. These are the interesting places where you have to think.
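What that client was really asking for is a precision/coverage trade-off. A minimal sketch, assuming any fitted scikit-learn-style classifier exposing predict_proba, of raising the decision threshold so we flag fewer customers but are more confident about each one:

```python
# A minimal sketch of trading coverage for precision in churn prediction.
# `model` is assumed to be any fitted classifier with predict_proba,
# e.g. sklearn's LogisticRegression; X is the customer feature matrix.
import numpy as np

def flag_churners(model, X, threshold=0.5):
    proba = model.predict_proba(X)[:, 1]    # P(churn) for each customer
    return np.where(proba >= threshold)[0]  # indices of flagged customers

# At threshold 0.5 we might flag ~30 of 100 customers; raising the bar
# flags fewer, higher-confidence ones -- fewer wasted retention offers.
# flagged = flag_churners(model, X, threshold=0.8)
```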
Now, coming back to the AI community: what we do is geared more toward problem solving. I'm not talking about churn customers or any business application; the problems I will discuss now may have no business impact at all, and no one will buy these products.

The first one is about text and numbers. We calculate even when no one has set us a problem. Say we learn that a friend got a job in the U.S., and we also learn his salary in U.S. dollars. What is the first thing we do? Multiply by 65, right, to work out how much that comes to in INR. No one told us to calculate that; there is no math exam. But we still calculate. The petrol price is hiked by 10%: immediately we compute the price times 110 divided by 100 to get the new price. Or: "Pakistan has 10 more nuclear weapons than India." What is Pakistan's actual number? What is India's? We want to work it out. It is never just one number we find and are happy with; we compare, we calculate.

So our first experiment was solving simple arithmetic word problems, standard two or standard three level. Like this one: "Adam had five apples. He gave three to Eve. How many are left?" What is the answer? Quick, quick. Two? Okay, cool.

Now let's look at it from the computer's perspective. I am using a few parsing technologies here, including dependency parsing, which is very common in traditional, classical natural language processing; it is not used in deep learning methods, but classical NLP always has POS tagging, parsing, dependency parsing, semantics, and so on. Take the first sentence, "Adam had five apples": Adam is the subject, a proper noun; "had" is the verb; "five" is a cardinal number; "apples" is the plural form of a noun. Done: we store the number in our accumulator, in some variable, in a register, in our brain.

Then: "He gave three to Eve." First of all, who is "he"? As a human, I immediately know he is Adam, but in NLP I have to use anaphora resolution, which connects the pronoun with the preceding noun. Then, three what? "Three" is also written as a word, T-H-R-E-E, but you converted it to the digit 3 automatically; I have to teach the computer that conversion. And three what? We assume three apples, but "apples" does not appear in that sentence either; we carried it over from the first sentence. That is the ellipsis problem: when a word is missing, you connect it with the previous sentence or paragraph. And now Eve, a second entity, has entered.

The third sentence is the most ambiguous: "How many are left?" Most of you said two, because you assumed the question is how many apples are left with Adam, correct? You didn't assume it means how many apples are left with Eve, or with Adam and Eve together, or with Tom. Who is Tom, right? No one thought of that. Everyone thought of Adam. Why? Because in our brain, in the accumulator, the registers we used, we put Adam first on a stack, so to speak; the variables were set up correctly as the characters arrived one by one.
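A minimal sketch of that first parsing step, using spaCy (one parser among several; the talk also mentions the Stanford parser) to tag the sentences, plus a naive accumulator for the arithmetic. Real systems would need the anaphora and ellipsis resolution described above on top of this:

```python
# A minimal sketch: POS-tag the word problem with spaCy and keep a crude
# running count. Assumes: pip install spacy
#                         python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Adam had five apples. He gave three to Eve. How many are left?")

for token in doc:
    print(token.text, token.pos_, token.tag_, token.dep_)  # e.g. five NUM CD nummod

word2num = {"five": 5, "three": 3}
count = 0
for sent in doc.sents:
    verbs = {t.lemma_ for t in sent if t.pos_ == "VERB"}
    nums = [word2num[t.text.lower()] for t in sent if t.text.lower() in word2num]
    if nums and "have" in verbs:   # "had" lemmatizes to "have": acquiring
        count += nums[0]
    elif nums and "give" in verbs: # "gave" lemmatizes to "give": losing
        count -= nums[0]
print("left:", count)              # -> left: 2
```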
So we started thinking along those lines: how does a human solve a problem? We actually implemented this, and it works very well for simple addition, subtraction, division, and multiplication. You may ask, what is the use? It is a very simple problem; why write code to solve it? One reason, definitely, is to learn how even these simple things can be solved by a computer. The second reason is that it is not so simple, if you look at the next problem.

It says: "A 6.8 kg toboggan is kicked on a frozen pond. It acquires a speed of 1.9 m/s. The coefficient of friction between the pond and the toboggan is 0.13. Determine the distance the toboggan travels before coming to rest." You probably can't read the slide, so I am reading it out. I'm not going to ask for the answer, but how many of you think this is a difficult problem, compared with Adam and Eve? Not so easy to answer the way we did for Adam and Eve. Yet it is not so difficult if we go through the same method.

Let us digest it. How many of you know what a toboggan is? Anyone? It's used in some amusement parks: a curved, boat-like sled you can lie down on and slide; the same thing is used in skiing areas, frozen areas. Now, the first repulsion happens at the word "toboggan," because it's not "apple"; it's unfamiliar, so we decide it's a hard problem. Many students think they are poor at maths. It's not about being poor at maths, because maths is nothing but the logic we are trying to apply; probably they just haven't heard the word toboggan, and so they think it's a hard problem. That is one thing. Second, you have forgotten the coefficient of friction. Who remembers it? Again, I'm not asking for the exact formula; good, you remember what the coefficient of friction is. It is a formula, and if you put in all the known variables, you can easily find the unknown one. So it's not a hard problem.

So again we go through it with NLP. But now it's not just "giving" and "taking" mapping to plus or minus. Take "throw" and "drop": both are verbs, and if I use the Stanford parser, "throwing" and "dropping" look identical, two English verbs in the continuous form. "The boy is throwing a stone," "the boy is dropping a stone": grammatically no difference. But in physics, if someone throws, there is an initial velocity u and an angle alpha, and there will be projectile motion, with the range depending on u and sin(alpha). In dropping, the initial velocity is zero, the only acceleration is g = 9.8 m/s² (which is probably all we remember), and the motion is straight down. So how do we attach this knowledge to those words? That is why I said contextual: the word "solution" in maths differs from the word "solution" in chemistry.
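Once the knowledge "kinetic friction decelerates the sled at a = mu * g" is attached to the words, the toboggan problem reduces to a one-line formula. A worked check:

```python
# The toboggan problem, once the physics knowledge is attached:
# friction decelerates the sled at a = mu * g, and v^2 = 2 * a * d at rest,
# so d = v^2 / (2 * mu * g). Note the mass (6.8 kg) cancels out entirely.
v, mu, g = 1.9, 0.13, 9.8
d = v**2 / (2 * mu * g)
print(f"distance before coming to rest: {d:.2f} m")  # -> about 1.42 m
```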
So, coming back to context: if you think the dictionary meaning alone is enough, it's not so easy. Meaning is often domain specific, and we have to build a heavily enriched knowledge graph for each and every word. It's not about definitions; "throw" and "drop" are simple words, and this is not deep learning, but we are still attaching a lot of knowledge to those parts. So yes, it's complex, but it has been solved: we have actually solved mechanics problems, so you can give the system any IIT entrance mechanics problem and it will solve it. We haven't done it for electricity or other areas; as I said, it depends heavily on the domain. Even "resistance" differs: mechanical resistance versus electrical resistance. So we also have to automatically classify which domain a problem belongs to, and based on that, which knowledge base to attach.

The next problem deals not only with numbers and text but also with images: measurable images, like charts. There are many images we don't measure precisely but still take an idea from. I have seen a bank ad with a big elephant playing football with a human being, and then a small elephant calf joins in; their message was, we serve all kinds of customers, not only big ones. That, too, is measurable in a loose sense. Or have you seen ads for a health drink like Complan, with human figures growing taller after one week, two weeks, three weeks? That is nothing but a bar diagram: instead of a series of rectangles, they have drawn human figures. We read measurements off all these visualizations.

My objective was this: suppose you don't have the table, you don't have the raw data, you only have the visualization. Just reverse-engineer it. Presenters on visualization usually go the other way: we have this data set, how do we represent it with a good, insightful visualization? Our objective is the opposite. We have a beautiful visualization and no raw table: can we extract the data automatically from the image?

Again, it is a solved problem; we actually solved it, by measuring pixels. For a bar diagram or a line chart, you can measure the number of pixels in the height of a bar and compare it against the axis labels. Suppose a bar ends between 30 and 40; as a human, you call it 35 if it sits exactly in the middle. But the number 35 is not written in the chart itself, so I cannot use OCR (optical character recognition) to read 35. I can use OCR to read the scale: 0, 10, 20, 30, 40. Then, comparing the pixel height of the rectangle against that scale, I get 35, or 37, or whatever it is, and we can reconstruct the original table that is missing.

Why do this? I may have a question answering system, and someone asks, "What is the revenue of this company in 2011?" That number is not mentioned in the text of the document; it appears only in a graph, which is now just an image. If you have to answer, you have to read that image.
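A minimal sketch of the pixel-to-value step, assuming the earlier steps (OCR on the tick labels, bar detection) have already produced two axis ticks as (pixel row, value) pairs and the top pixel row of each bar; the tick positions below are hypothetical:

```python
# A minimal sketch of reverse-engineering bar values from pixel positions.
# Image y-coordinates grow downward; the linear map handles that naturally.
def pixel_to_value(y, tick_a, tick_b):
    (ya, va), (yb, vb) = tick_a, tick_b
    return va + (y - ya) * (vb - va) / (yb - ya)   # linear interpolation

tick_30, tick_40 = (300, 30.0), (250, 40.0)        # hypothetical tick rows from OCR
bar_tops = [275, 230, 310]                          # hypothetical detected bar tops
print([round(pixel_to_value(y, tick_30, tick_40), 1) for y in bar_tops])
# -> [35.0, 44.0, 28.0] : the "exactly between 30 and 40 means 35" case
```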
So we are working on that kind of problem. Again, it is problem solving: think of data interpretation questions, where you have a table or chart, some description above, and a question below, typical statistics questions. Many of you have faced this kind of data interpretation in entrance exams. We are trying to solve those too.

Next is vision: pure computer-vision problem solving, like the Sudoku puzzle, finding the differences between two images, or solving a maze puzzle. Again, no one will buy this solution from us, because no one wants a robot to solve a Sudoku puzzle on their behalf. But the same solution can be applied elsewhere, for security purposes, say: reading a vehicle number plate and automating a few things, or reading any other text, like a banner. And the reason we focus on problem solving, as I mentioned, is that we are not only extracting "this is such-and-such number"; we are also trying to solve the puzzle the way a human thinks. That is the objective. And the most important objective is to learn: to learn how we learn.

A lot of feature extraction is required, and as I mentioned, we are deliberately not relying too heavily on deep learning; rather, we break things down into components and go back to low-level programming, so that we also understand how deep learning works. If you want to know how a CNN works, you cannot simply ignore the feature extraction that the CNN does automatically. If, as a human, I can identify the features and simulate that process, that is essentially what a CNN does: it works over pixels, then clusters of pixels, applying a neural network to each set. In our case, we used a lot of statistical methods to extract features: whether the character contains a loop, where the first black pixel falls when scanning from the left, right, top, or bottom, and so on. We ended up with 64 different features, and then wrote a simple ANN with the backpropagation algorithm, combined with a hidden Markov model.
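A minimal sketch of a few of those hand-crafted features, computed on a binary character image with NumPy: the depth of the first black pixel from each side. (This gives only 4 of the 64-odd features; loop detection, profiles, and the rest would follow the same spirit.)

```python
# A minimal sketch of hand-crafted character features: the depth of the
# first black pixel scanning from top / bottom / left / right.
# `img` is a binary 2D array: 1 = black (ink), 0 = white background.
import numpy as np

def first_hit_features(img: np.ndarray):
    feats = []
    for view in (img, img[::-1], img.T, img.T[::-1]):  # top, bottom, left, right
        hits = view.argmax(axis=0).astype(float)        # first row containing ink
        hits[view.max(axis=0) == 0] = len(view)         # columns with no ink at all
        feats.append(hits.mean() / len(view))           # normalized average depth
    return feats

img = np.zeros((10, 10), dtype=int)
img[2:8, 4] = 1                                         # a crude vertical stroke
print(first_hit_features(img))
```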
Beyond computer vision and NLP, we also focus on IoT and robotics: we call it embedded intelligence, EI. Most robots today concentrate on the mechanical side; what we are doing is combining in the AI algorithms and burning them in at the chip level, so the device reacts to whatever it captures. On the computer-vision side, the robot identifies a person, greets that person, and decides what kind of conversation it can have. Think of any conversational bot or personal assistant, Cortana, Siri, or Google Assistant: those are individual, used on your own mobile, serving just you. And when we talk about personalization, we should also ask how personalized it really is: is it trained on your data or not?

Beyond speech recognition and face recognition, we have also looked at the fact that the way of talking differs with the listener, so we customize the generated speech based on whom the system is talking to. When I talk to my school friend, my wife, my boss, my junior, or my senior, my phrasing changes; the way of talking changes. In the same way, we are trying to generate speech automatically based on the persona. Think about a device like Alexa sitting in your house with five or six members, including your parents: that device is not for one individual. It should interact with multiple personas based on their gender, age, and taste, and we can use reinforcement learning to understand each person better over time. Then we can genuinely call it a personal assistant, because it is customized per user and continuously learning.

We have done a few things already. We have built physical robots, not just chatbots, which can identify you, greet you, and start a conversation based on who you are. You can also leave a message: suppose I tell the robot, please pass this message to Srijak. If Aditi comes by in the meantime, it will say, "Hi Aditi," but it will not pass the message to her; it will wait until Srijak arrives, identify Srijak, and deliver it: "Joy has given this message for you." (A small sketch of that hold-until-recognized loop follows below.) This kind of system definitely has a huge market impact, and everyone is working on advanced AI. Although at conferences like this we talk about a lot of advanced things, back at the office we are probably running some mundane data-cleansing job and fitting a simple logistic regression, right? This is why we built the community: so that in parallel you also have this food for thought.

In this community, we certainly want to help industry, but we are more focused on societal impact: whether students can benefit and how we can empower them; specially-abled children; corporates and startups struggling with AI research, because small companies don't have an AI wing, so we support them, mostly the startups; and governments, both the various state governments and the central government, with whom we have started working closely, so that through AI we can ultimately help the entire nation. As I mentioned, this is a community, a nonprofit organization; everyone here is a volunteer, contributing their intellectual thought process.

These are the subject areas we focus on. The last one is very interesting: right now we are working on embedded systems and IoT with the AI built in, so the device itself can take decisions. In fact, beyond IoT, whatever algorithm you want, you write it into the device itself: the fridge, the washing machine. Perhaps you have too many blue t-shirts, and the washing machine suggests, why don't you buy a green one?
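Here is that promised sketch of the robot's message-holding behavior. Everything around the waiting logic is a hypothetical stub: capture_frame and recognize_face stand in for the robot's camera and face-recognition modules, and printing stands in for speech.

```python
# A minimal sketch of "hold the message until the intended recipient is
# recognized". The stubs below stand in for real camera, face-recognition,
# and speech modules; only the waiting logic is the point here.
import time

def capture_frame():                 # hypothetical camera read
    return None

def recognize_face(frame):           # hypothetical recognizer: a name, or None
    return None

def deliver_message(recipient: str, message: str, sender: str) -> None:
    greeted = set()
    while True:
        person = recognize_face(capture_frame())
        if person and person not in greeted:
            print(f"Hi {person}!")               # greet everyone who passes by
            greeted.add(person)
        if person == recipient:                  # ...but deliver only to the target
            print(f"{sender} has given this message for you: {message}")
            return
        time.sleep(1)                            # poll roughly once a second

# deliver_message("Srijak", "The meeting moved to 4 pm.", "Joy")
```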
Or the microwave, which you probably use only for heating food. How many of you cook in the microwave? Apart from heating, no one cooks? You do? Okay, that's great. The microwave could actually make suggestions based on what kind of food you cook, on your food habits. So a lot can be done once you combine the hardware component and embed all of this intelligence; it adds up to something bigger.

We are also focused on AI principles. These are actually from Satya Nadella, the CEO of Microsoft; I have simply copied his points, and they are very good. I think we need to follow them, and we follow these ethics, these AI principles, very strongly in MUST Research as well. Two of them stand out: AI must guard against bias, and AI must be transparent. Sometimes we say, you don't have to worry about what we are using, it's magic. No: the end user, the customer, the client, even society and government, whoever receives the service or solution, should know how it was derived. It should not be, "I am using a black box or a third-party package, and I cannot disclose."

The second is bias. Even as humans we get biased, for many reasons. Instead of scolding someone else's son, I scold my own son; it happens. The other child may have made the mistake, but in trying to look unbiased, we become biased. Or take doctors prescribing one particular pharmaceutical company's products; apologies if there is a doctor here. If I receive sponsorship from a medical representative, I may write prescriptions only for their brand. If, instead of a brand name like Crocin or Calpol, I write the generic name, paracetamol, you can buy any brand, and the prescription is unbiased. But if I write only one brand and the patient believes they must buy exactly that, it is a biased decision. So we should think about how our algorithms can be unbiased, removing human bias as well.

Next, at MUST we particularly focus on open source technologies, but we have also verified interoperability with other platforms, particularly Microsoft and IBM: we have tested with Azure machine learning and with Watson. It is not that we will only ever use open source; if someone wants this on their own platform, they can integrate it easily.

And research here is not only theoretical. Sometimes, as I mentioned, it is not for business but for learning, more academic research, yet it should still be implemented; it should not end at a paper. A paper is a by-product of our research, and we do try to publish in good journals, conferences, and forums like this one. You have seen that at ODSC, MUST is the ecosystem partner; we have four speakers from this group sitting here. We also ran a hackathon with Microsoft Garage, which was the first external hackathon arranged by Microsoft. Usually hackathons happen only inside Microsoft, but this was the first external one, with MUST again as the ecosystem partner.
So we are trying to encourage and engage people in these new things, because there are a lot of people, at least 60% at this conference, I have seen, who are interested in AI and data science but are not data scientists at all; they know bits and pieces and want to grow their careers in this field. We welcome them.

We try to bridge the gap between academia and industry. In academia, there are good research labs doing a lot of deep research, but mostly theoretical; often publishing a paper is the end result, and nothing is implemented as a finished product. In industry, it is the quick fix: let's solve this first, there is a DSAT, a DRI, a deliverable, a client, a deadline; forget deep learning, just fit a simple decision tree and ship it, or go rule-based with if-then-else. Not because people don't care, but because there is no time, there are commitments, there is a fixed budget for the period. So we say: you are from academia, come do this kind of product development with us; you are from industry, why don't you start reading these research papers and present what you learned at the next meetup. Then we start working together on two or three small projects; the projects I mentioned are only a few, and we have more than 100 going on.

And finally, to align with the central government initiatives Make in India, Skill India, and Digital India, we have given this the name Intelligent India. We are very much open to collaboration with any corporate or government organization, and with academia as well: we have already collaborated with ISB, the Indian School of Business, with IISc, and with IIT Kharagpur. We have many mentors from there, and we also mentor their students, because our members come from different industries. We have started supporting various startups too, and we are getting good traction.

The activities start with pulse creation: what is the subject about, what are its sub-topics and super-topics; then we arrange lectures, very interactive ones, not one person delivering while everyone else thinks, this is too boring. Then ideation and maker events, hackathon-type things. Then leadership connect: instead of forwarding a resume to someone, if that person already knows, "I know this guy, he worked on a beautiful deep learning project," and their organization is looking for data scientists, they will hire him or her. That is the best process: you convince people with your work, not with a CV full of jargon from Wikipedia. Then implementation, which is very important; as I said, it should not remain theoretical, and based on feedback and iterations you improve it, demonstrating at different academic venues too. And evaluation of ideas matters as well: we should not pursue something infeasible or impermissible, like declaring "kal ko ek drone udayenge" (tomorrow we'll just fly a drone) when that is not permitted. I once considered an interesting project: digitizing doctors' handwritten prescriptions automatically. But the first hurdle was that such data is very hard to get.
It is personal information, right? I cannot ask friends and family, please hand over your prescriptions so I can collect them. So those are the things we have to weigh: evaluation, feasibility, and so on. And finally, we are trying to build the ecosystem with data scientists and data science enthusiasts.

That's all from my end. Please connect with me: just search Joy Mustafi on LinkedIn. And if you are interested, join this community; it is open, and we operate from eight cities: Hyderabad, Bangalore, Kolkata, Pune, Bombay, Chennai, Delhi, and Bhubaneshwar. If you want to connect with more people, work on small research projects, and think deeper, please feel free to join and connect over LinkedIn. Thank you.