But we'll start. So my name is Anand Gupta. I've been working with Morgan Stanley for quite some time now; it's been years. Today's talk is about the world of finance and how we can apply deep learning methodologies to it. Finance has always been a fascinating topic. If you go a decade back, everybody who was graduating wanted to join some fancy hedge fund for millions of dollars. That was the buzzword back then, though sadly it is not any more. But one more group of people got fascinated by that world, and that was academia. The reason was that the world of finance has been full of equations, formulas, and statistics right from the word go, even decades ago. Even today, if you ask a layman, that is the perception they have: oh, you are a financial analyst, you must be working with the Black-Scholes model and the Merton model. But underneath all of that, the fundamental principle is very simple. It is based on the concept of demand and supply. Let's say you have a product, and there is a buyer for it and a seller for it. The buyer quotes a bid price and the seller quotes an ask price. These keep moving along, and as soon as there is a match, the trade is locked in. This is what is happening in all the exchanges around the world: when you see a price, it is something that a buyer and a seller have agreed upon. And what defines a product? A product is anything, as long as you can have a buyer and a seller for it. Nowadays we even have derivatives on whether it will rain tomorrow or not, so you have products like that too, but we will not go to that extent. Today we'll be discussing the simplest product we have, the common stock. So let us take a look at some stocks.
So this is a stock, Iluka Resources. It is a mining company that also specializes in some rare earth elements. The second stock we have is NVIDIA; everybody is aware of it, it manufactures GPUs and some systems-on-chip. The third stock is Electronic Arts. We all remember our FIFA and cricket days; this company is one of the pioneers and among the largest video gaming companies in the world. And fourth, we have Bitcoin. I don't need to tell anybody about Bitcoin; the insanely high prices have made sure everybody knows about it. But why did I display these four stocks, these four products? There must be some relationship. As a layman analyst, I can say: Iluka is a mining and metals company; it provides raw materials for NVIDIA to manufacture its GPUs, and those GPUs are used in Electronic Arts' gaming platforms and also for Bitcoin mining. So can I derive a hypothesis that if the stock price of Iluka increases, the stock price of NVIDIA will increase? And similarly for NVIDIA and Electronic Arts, and for NVIDIA and Bitcoin? That is just a hypothesis; we need to verify it. So let us take a look. If I look at the returns of Electronic Arts and NVIDIA, we can see some kind of relationship. They are not exactly the same, but they go hand in hand. Keep this visual in mind. On the other hand, if you take Iluka and NVIDIA, they do not track each other as closely as Electronic Arts did, but they move broadly in the same direction. Now if I take Bitcoin, something is wrong here. Bitcoin is definitely not in the same league as NVIDIA; it has a lot of variation and NVIDIA doesn't. So now our hypothesis stands like this: if Iluka increases, NVIDIA increases, correct. If NVIDIA increases, Electronic Arts increases, correct. But the one with Bitcoin, incorrect.
Now, there are many companies, products, and stocks around the world; there are around a hundred thousand stocks, and it is difficult for an analyst to understand all of that. There are knowledge-graph approaches trying to solve this problem, but they are not completely up to the mark yet, so we are still relying on statistical methodologies for now. So we need something else. In today's talk I'll quickly cover the basics of finance, which will include stock returns, covariance, correlation, portfolios, and the risk-return graph, and then I'll move on to the deep learning methodologies.

So, stock returns. A stock return is nothing but the amount of profit you can get from a particular stock. Let's say yesterday the price of a stock was 10 and today the price is 11. The profit is (11 - 10) / 10, so 10%. Now, if I do this calculation for 10 days, what I get is the one-day return rolled over 10 days, which is what we call rolling returns. We have done this for one stock, but we would like to relate stocks to each other, and that is where the covariance matrix comes into the picture. You all know about variance: given a vector, we can find its variance, which is the mean squared deviation from the mean. The same idea can be applied across multiple stocks. For example, if you take a look at this particular matrix, we have stocks as columns and prices as rows, and when we compute the covariance we get a 3x3 matrix. What it is saying is that stock one's relationship to stock two is 0, and stock one's relationship to stock three is -2.4: the covariance. The diagonal entries are the variance of each stock with itself, and you'll see repeated values off the diagonal because the matrix is symmetric.
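To make that arithmetic concrete, here is a small sketch in Python with NumPy; the price numbers are made up for illustration, not the talk's data:

```python
import numpy as np

# Hypothetical closing prices: rows are days, columns are three stocks.
prices = np.array([
    [10.0, 20.0, 30.0],
    [11.0, 19.0, 31.0],
    [12.0, 21.0, 29.0],
    [11.5, 22.0, 28.0],
])

# One-day simple return: (today - yesterday) / yesterday.
# For stock one on day two: (11 - 10) / 10 = 10%.
returns = prices[1:] / prices[:-1] - 1.0

# Covariance matrix of the return series: 3 x 3 and symmetric,
# with each stock's own variance on the diagonal.
cov = np.cov(returns, rowvar=False)

print(returns[0])               # first day's returns for the three stocks
print(np.allclose(cov, cov.T))  # True: the matrix is symmetric
```

Stacking the one-day returns row by row gives exactly the rolling-returns matrix the talk describes.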
I'll come to how we can use this. One problem we see here is with these two values, -2.4 and -5.6. For stock three, the covariance with stock one is -2.4 and with stock two is -5.6. Can we say the second relationship is roughly twice as strong? No, we can't, because we don't know the upper and lower limits of covariance. That is why the covariance matrix by itself is not a good idea. So something else came up, and that was correlation analysis, in which we scale the covariance by the standard deviations themselves. Now we have a neat little matrix where all the values range from -1 to 1. If you look at the same relationships, now -0.49 and -0.65, we immediately understand them: we can say one pair is about 49% negatively related and the other about 65% negatively related.

So now let us bring back the same charts. This will be a puzzle for you all. We had Electronic Arts and NVIDIA, and their correlation came out to 0.115. If I ask you what the correlation between Bitcoin and NVIDIA will be, will it be less or more than 0.115? Okay, let us look at the value. It is actually 0.2. So you immediately see there is some problem with correlation analysis, because it is purely statistical in nature. What we see here visually, our mind, with all its neural networks in place, is able to gather; but these statistics look only at point values, they don't look at the directions of movement. So, to wrap up the covariance and correlation part: as I mentioned earlier, a covariance of 0.06 by itself does not convey much, while the same relationship expressed as a correlation does. Some takeaways: correlations can sometimes be deceiving, as we already saw, but correlations are still better than covariances for making such judgments. Okay, so we have spoken about stocks.
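That scaling step can be sketched directly; dividing each covariance by the two standard deviations bounds every entry in [-1, 1]. The covariance matrix below is made up for illustration:

```python
import numpy as np

def correlation_from_covariance(cov):
    """Scale a covariance matrix by the standard deviations,
    so every entry lands between -1 and 1."""
    std = np.sqrt(np.diag(cov))      # per-stock standard deviations
    return cov / np.outer(std, std)

# Hypothetical 3-stock covariance matrix (not the talk's data).
cov = np.array([
    [ 1.0,  0.0, -2.4],
    [ 0.0,  4.0, -5.6],
    [-2.4, -5.6, 16.0],
])
corr = correlation_from_covariance(cov)
print(np.round(corr, 2))  # diagonal is all 1.0; off-diagonals are bounded
```

With these numbers, the -2.4 and -5.6 covariances become -0.6 and -0.7, which are now directly comparable.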
We have spoken about how we can analyze them. But what next? What we want is to earn money out of this, and to earn money we need to buy stocks. And we don't buy a single stock; we buy multiple stocks, and those multiple stocks constitute a portfolio. The word portfolio is self-explanatory, but we would like to represent it mathematically. How do we do that? Let's say we have 100 rupees, and we invest 30 rupees in stock one, 50 rupees in stock two, and 20 rupees in stock three. The corresponding weight vector is (0.3, 0.5, 0.2). This is one of the vectors we'll use for the rest of our analysis. So the vectors defining a portfolio are: the weights vector; the returns, which come from the market for the stocks we have chosen; and the covariance matrix, which is computed from those returns. Is everybody clear up to this point? I think it should be.

Now, as with any problem we try to solve, we need metrics. How do we evaluate a portfolio I have, whether it is good or bad? We have two simple metrics. One is return: the portfolio should give me profit. The second is a new concept I'll discuss now: risk. In financial parlance, risk is nothing but deviation from what we expect, and the expectation is nothing but the mean. If a stock deviates a lot from its mean, we don't want it, because we don't know how far away from the expectation it will end up. We want a stock that remains stable, like, say, government entities. That is the definition of risk. We have very simple formulas for calculating both the risk and the return at the portfolio level.
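Those portfolio-level formulas are the standard ones: return is the weighted sum of expected returns, and risk is the square root of the quadratic form w'Σw. A sketch with the 30/50/20 split from the talk; the expected returns and covariance here are made-up illustrative numbers:

```python
import numpy as np

weights = np.array([0.3, 0.5, 0.2])          # the 30/50/20 rupee split
mean_returns = np.array([0.08, 0.12, 0.05])  # hypothetical expected returns
cov = np.array([                             # hypothetical covariance of returns
    [0.04, 0.01, 0.00],
    [0.01, 0.09, 0.02],
    [0.00, 0.02, 0.16],
])

port_return = weights @ mean_returns          # w . mu
port_risk = np.sqrt(weights @ cov @ weights)  # sqrt(w' Sigma w)

print(port_return)  # 0.094
print(port_risk)
```

Note that the cross-covariance terms mean the portfolio risk is not just a weighted average of the individual risks; that is what diversification exploits.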
So one thing I want to ask you: from the portfolio perspective, out of the three vectors I showed you, which one is under our control? The weights, because everything else is fixed once we have decided which stocks to buy. This brings up an interesting idea: if I choose different weights, how will my risk and return change? I would like minimum risk and maximum return; that is what I want. So what we do is run a simple simulation on the stocks we have, with different weight vectors, and what we get is a graph of this particular shape. The reason is that the world is cruel: you cannot have returns without taking risk, and that is why the graph bends this way. As soon as you try to increase your return, the risk will definitely increase.

Let us look at some practical examples. This is the risk-return plot for the three stocks we discussed earlier. The area marked with the red ellipse is the sweet spot we want, because there we have decent return and low risk. As soon as we try to push the return higher, the return increases only a little but the risk we take on increases a lot. That is why this region is considered optimal. Now let me add one more stock. I added more products, like Apple, and we see that the risk-return graph became richer: we have more points in that sweet spot. So this seems simple, right? I'll just keep adding more stocks and keep simulating. So let me add Bitcoin. Oh, something very weird has happened. The reason is that Bitcoin had a lot of variance, and the covariance matrix it produced created all these problems.
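The simulation behind that graph can be sketched like this: draw many random weight vectors that sum to one, and record each candidate portfolio's risk and return. The expected returns and covariance below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
mean_returns = np.array([0.08, 0.12, 0.05])  # hypothetical
cov = np.array([                             # hypothetical
    [0.04, 0.01, 0.00],
    [0.01, 0.09, 0.02],
    [0.00, 0.02, 0.16],
])

risks, rets = [], []
for _ in range(5000):
    w = rng.random(3)
    w /= w.sum()                        # weights must sum to 1
    rets.append(w @ mean_returns)       # portfolio return
    risks.append(np.sqrt(w @ cov @ w))  # portfolio risk

# Plotting risks (x) against rets (y) gives the bullet-shaped
# risk-return cloud shown on the slide.
print(min(risks), max(rets))
```

Adding a stock just means one more column in the inputs and one more weight per draw, which is why the cloud gets richer, and why a very high-variance asset like Bitcoin distorts it.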
So it is not as simple as just simulating over all the stocks; we definitely need some other solution. The two takeaways are that for constructing a portfolio we need two important things: which stocks to select, and how much to allocate to each of them. These are the two determining problem statements we need to solve.

I'll quickly brush through current portfolio construction. One approach is the efficient frontier. See, finance is a very old field; these ideas have been there for decades, not just years. This one was written around the 1960s or 70s, if not in use back then. We create the same risk-return graph, and what they did is: given the region of interest you want, you draw a tangent line touching this frontier, and where the tangent touches it, that is your optimum portfolio. This is the efficient frontier with the capital market line from that work.

Now we'll move on to some interesting topics. I'll just list the topics we'll be covering: why deep portfolios, what autoencoders are, deep portfolio stock selection, deep portfolio latent features, and deep portfolio rebalancing. Why a deep portfolio? We have already seen that the relationships between stocks are not entirely linear in nature. We need something that can dig deeper into the data; we have a lot of data points, but we don't want to simply fit a line through them. We use deep learning to come up with a good representation of the market. Remember that chap who was facing problems when he had to work on a lot of stocks? I had displayed a man there on the slide. He needs a condensed representation of the entire market to be able to analyze it better.
I mean, we cannot say, okay, we have 100,000 stocks, and we'll take the last 10 years of data; the matrix becomes too large. So we want a condensed representation. Now, for solving this problem I could have chosen some other neural network too, but I chose autoencoders, and the reason is that they are simple and powerful. I feel autoencoders are the most efficient way to get patterns out in an unsupervised manner. For those who attended the sessions over the past three days, this must have come up frequently, but I'll still go through the explanation. Say you have some input data, and your target output is that same input data, which we are trying to recreate. We pass it through layer one, layer two, layer three, and we check: has it been recreated faithfully? Then we run the backpropagation algorithm (everybody is aware of that) and minimize the error between the input and the reconstruction. What is interesting is the middle part. Once you have trained your autoencoder, if you chop off the sections on either side, what you are left with is the narrow bottleneck in the middle. Now notice: using just the bottleneck's data, you are able to recreate the input. Correct? This means all the intelligence in the input is stored there; obviously, otherwise it would not have been able to recreate it. So the bottleneck is a condensed representation of the intelligence in the input data set itself, and this is exactly what we are trying to leverage. So again: we have the input data, we have the latent features, and there is the recreated data with small errors, which we are fine with. Okay. So now, stock selection. As I mentioned, we have a lot of stocks at hand, and we want to understand which stocks to choose for our portfolio construction.
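The talk's implementation uses TensorFlow; as a dependency-free sketch of the same encode-bottleneck-decode loop, here is a tiny linear autoencoder trained by plain gradient descent on made-up data, just to show the reconstruction error shrinking as it learns:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy input: 200 samples in 10 dimensions with a hidden 3-D structure,
# so a 3-unit bottleneck can in principle reconstruct it well.
X = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 10))

n_in, n_latent, lr = 10, 3, 0.01
W_enc = rng.standard_normal((n_in, n_latent)) * 0.1  # encoder weights
W_dec = rng.standard_normal((n_latent, n_in)) * 0.1  # decoder weights

losses = []
for _ in range(500):
    Z = X @ W_enc        # encode: the latent features (the bottleneck)
    X_hat = Z @ W_dec    # decode: the reconstruction
    err = X_hat - X
    losses.append(np.mean(err ** 2))
    # Backpropagate the mean-squared reconstruction error.
    g_dec = Z.T @ err / len(X)
    g_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

print(losses[0], losses[-1])  # reconstruction error falls as we train
```

A real version would add nonlinear activations and more layers, which is what lets the autoencoder capture the non-linear relationships the talk cares about.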
For representational purposes I have chosen stocks from the S&P 500, so 500 stocks, and taken the last 10 years of stock prices. I simply create a matrix with stocks as columns and returns as rows, so each column represents a particular stock; for example, one column would hold all the returns for Tata Steel over those 10 years. We run this through the autoencoder and minimize the reconstruction error. After the autoencoder has been trained, we look at two things: which stocks were recreated almost perfectly, and which stocks were recreated worst, the two ends of the spectrum. The intuition is that the stocks recreated perfectly are the ones that move the market; they are like the market makers, they represent the market best. And what kind of stocks represent the market? In financial parlance, these are the large-cap stocks, stocks whose prices a few trading agencies cannot easily push around. And then we have the bottom 50 stocks, the ones with the highest RMSE, which were recreated least faithfully. So we are testing our hypothesis: whether our top 50 and bottom 50 actually capture that pattern in the market data. This is the risk-return graph I showed for all the stocks: this axis is return, this one is risk. An interesting thing to note is that the risk starts from 0.5 here, so keep that in mind, and the return goes up to 0.06. This is the initial state. Now, what have we got out of the autoencoder?
So these are the stocks with the least RMSE, meaning they were recreated almost perfectly. Something interesting has happened here: the risk has decreased drastically, now starting around 0.1, but the return has also dropped sharply, which is exactly the pattern we see with large-cap stocks, the ones that represent the market. Now let us look at the bottom 50 stocks. Whoa. The interesting part, again: you cannot increase the return without taking on more risk; you have to take a lot of risk to get any return out of these stocks. These are what are known as high-beta stocks: the small-cap and mid-cap names that see 10 or 20 percent falls every now and then. So, using a simple autoencoder, around 20 to 30 lines of TensorFlow code, and open-source data, we have been able to segregate these, without any financial knowledge as such; it was just a hypothesis that could be tested afterwards. There is a paper that says that for constructing a portfolio you need both of these: the ideal way is to take a mix of high-performance stocks like these along with a combination of stable stocks. So, using neural networks, we have segregated the stocks we would want to use for our portfolio construction.

Okay, the second application I would like to highlight is latent features. Here we have made a slight twist. Initially, in the previous problem statement, we had stocks as columns and returns as rows. We now just transpose it: returns in columns and stocks in rows. Then we run it through the same autoencoder, nothing fancy. The latent-feature matrix we get here is something very unique, and I'll tell you why: the number of rows here is equal to the number of stocks.
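The selection step itself is simple once the autoencoder is trained: compute a per-stock (per-column) RMSE between the input and the reconstruction, then take the two ends of the ranking. A sketch with random stand-in arrays in place of a trained model's output:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins: 250 days of returns for 100 stocks, plus a fake "reconstruction"
# whose error varies by stock (a trained autoencoder would supply the real one).
returns_matrix = rng.standard_normal((250, 100))
recon = returns_matrix + rng.standard_normal((250, 100)) * rng.random(100)

# RMSE per stock, i.e. per column.
rmse = np.sqrt(np.mean((returns_matrix - recon) ** 2, axis=0))

order = np.argsort(rmse)
best_50 = order[:50]    # reconstructed best: the "market mover" candidates
worst_50 = order[-50:]  # reconstructed worst: the high-beta candidates

print(rmse[best_50].max() <= rmse[worst_50].min())  # True by construction
```

The hypothesis in the talk is then checked visually, by plotting the risk-return graph of each group separately.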
I think everybody agrees with that; is anybody having any issue there? The number of rows equals the number of stocks, but the time-series data has been compressed to a smaller dimension. All that daily price movement over 10 years, roughly 2,500 trading days of intelligence per stock, has been compressed into a vector space of, say, 50 dimensions, a much smaller space, and yet it is still able to explain all those days of data. That is the power of autoencoders: they can find the latent features, the patterns, around the data. Now we can do some interesting stuff with this. But we need to test it out; I can claim anything, but we need results. No, no, this is not the same as principal components as such. In a way it might resemble principal components, but these are not principal components; I would not be able to verify that without running PCA and checking whether the vectors are similar. We are trying to arrive at the same kind of thing, but PCA is linear in nature, so it will have the same pitfalls as correlation analysis. So now let us give ourselves a problem statement; we need one to work with. Which stocks are closest to NVIDIA, which are like NVIDIA? This is a common problem faced by people outside the financial industry too: tell me all the users similar to this user, all the products similar to this product, and so on. First we'll try to find the similar stocks using our original, uncompressed time-series data. For NVIDIA, the least correlated stocks came out as Expedia, Walt Disney, Universal Health Services, and so on. Fine; these seem to have no relation to NVIDIA, so calling them least correlated is plausible.
But when we find the most correlated stocks to NVIDIA, we again get names like Xcel Energy and Stryker; these don't seem related at all. Something is wrong with this. Now let us do the same exercise with the latent-feature matrix we had. The least correlated: an electrical-systems company, DuPont, Verizon, Coca-Cola, Brighthouse; we are good there. The most correlated: Facebook, Broadcom, Advanced Micro Devices, Freeport mining, Netflix. I don't need to tell you that these do seem related to NVIDIA, right? So you can imagine the power: we have not written very fancy code, we used open-source data, and we are able to surface these patterns.

Okay, there is one more puzzle left to resolve. You remember the correlation puzzle, where we expected Bitcoin to be less correlated with NVIDIA than Electronic Arts was, but the raw value came out higher, at 0.2? Let us check that too. With NVIDIA and Electronic Arts, the raw correlation came out to about 0.115, but if we compute the same correlation on the deep latent features, it comes out to 0.99. I used this latent-feature matrix for finding the correlation between NVIDIA and Electronic Arts. So although the raw correlation of 0.115 suggested only a weak relationship, even though visually the two looked very similar, this analysis resolved that puzzle as well. The point is that this use of neural networks, of autoencoders, is powerful because we don't require a lot of domain knowledge: the autoencoder itself comes up with the hidden latent features that can be used.
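The similarity lookup on the latent features is just a nearest-neighbour search over rows: one latent vector per stock, ranked by correlation. A sketch with random stand-in features and hypothetical ticker labels (a real run would use the trained latent matrix):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in latent matrix: one 50-dimensional feature row per stock.
latent = rng.standard_normal((500, 50))
tickers = [f"STOCK_{i}" for i in range(500)]

def most_similar(idx, latent, top=5):
    """Rank all other stocks by correlation of their latent-feature rows."""
    corrs = np.array([np.corrcoef(latent[idx], row)[0, 1] for row in latent])
    corrs[idx] = -np.inf  # exclude the query stock itself
    return np.argsort(corrs)[::-1][:top]

best = most_similar(0, latent)
print([tickers[i] for i in best])
```

Running the very same function over the raw returns matrix instead of the latent matrix reproduces the talk's before/after comparison.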
Now, all right, you might say: this is fine, man, but I still have stocks in my portfolio which are tanking while NVIDIA is hitting all-time highs. What do I do now? I can't just bear the loss and buy out all the new stocks; that is not possible. So what do I do? We have to do portfolio shuffling, portfolio optimization, and that is a very challenging topic. Say you have three stocks with you, and based on the previous analysis I tell you, here are the three new stocks you should be invested in, never mind what you hold now. It is not possible for you to absorb all the loss and buy all the new stocks; and it's possible the new stocks themselves will incur losses, you don't know. So you have to convert this into an optimization problem that can be broken down into factors. The factors we want to touch are: fewer transactions (if you have three stocks, sell all of them, and buy the three new ones, that is six transactions in total, and we would like fewer, since each one costs us); a smaller number of total stocks (another option is to keep your three stocks and also buy the three new ones, but that raises the total holdings to six, and we want fewer); and, needless to say, maximize returns and minimize risk. So what do we do? I have written TensorFlow code that uses a loss function combining these terms: the portfolio return has to be maximized, hence the negative sign, while the deviation (the risk), the transactions, and the stock count all have to be minimized. This becomes a loss function, and you can use any optimizer.
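That kind of combined loss can be sketched as follows. The talk's TensorFlow code relies on automatic differentiation; here I use a crude numerical gradient instead, and the returns, covariance, and penalty coefficients are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 6                                       # 3 held stocks + 3 candidates
mean_returns = rng.random(n) * 0.1          # hypothetical expected returns
cov = np.diag(rng.random(n) * 0.04 + 0.01)  # hypothetical (diagonal) risk
w_current = np.array([0.4, 0.3, 0.3, 0.0, 0.0, 0.0])

def loss(w):
    ret = w @ mean_returns                    # maximize -> enters negated
    risk = w @ cov @ w                        # minimize
    turnover = np.sum(np.abs(w - w_current))  # proxy: fewer transactions
    holdings = np.sum(np.abs(w))              # proxy: fewer total stocks
    return -ret + risk + 0.01 * turnover + 0.01 * holdings

w, lr, eps = w_current.astype(float).copy(), 0.05, 1e-6
history = [loss(w)]
for _ in range(300):
    grad = np.zeros(n)
    for i in range(n):                        # central-difference gradient
        wp, wm = w.copy(), w.copy()
        wp[i] += eps
        wm[i] -= eps
        grad[i] = (loss(wp) - loss(wm)) / (2 * eps)
    w -= lr * grad
    history.append(loss(w))

print(w.round(3))  # updated weights after the shuffle
```

Extra constraints, such as capping exposure to the technology sector, would simply be further penalty terms added to `loss`.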
And here is what you get: you take the returns, which include the returns of the original stocks plus the new stocks; you initialize the weight vector with the current holdings, zeros for the new stocks; and after you run the optimizer, you get a nice little updated weights vector. This is what you are looking at, and it has taken all the different factors into account. Throughout this whole exercise, you will see that the application of neural networks to the world of finance has been more on the creative side. It is not that you just use an LSTM and solve the issue, or just use an RNN and solve the issue. It has to be more creative, based on your requirements; you yourself come up with the loss functions you require, and you can add more of them. If you want, say, to discourage people from investing more in the technology sector, that can be a constraint you add here. As for future advancements: the market map, as I mentioned, where we want to condense all that information into a smaller dataset. That is again a very challenging problem, and we are writing a paper on it that we'll be publishing soon, so this is one area we are working on. The second is, again, the metric to be used for portfolio construction. Currently we use return and risk, but can we use something else, maybe some other regulatory parameters? These are things we are currently working on. That's it for this presentation. Thank you. Questions?

Hello. Yeah, yeah. So when we did the correlation analysis between Bitcoin and NVIDIA, there was a positive correlation, whereas the feature vector clearly pointed out it is much more negative. How can we put that in business language; what is being captured here? I have one hypothesis, maybe I'm wrong; maybe you can tell me if it is right or wrong. Yeah.
So that's why I'm attending this conference: the explainability part. To be very frank, that is a problem we all grapple with, at least in financial institutions; we need to explain every single prediction, and that is why they don't like these black-box models. One way to think of it is that the correlation matrix only takes into account how often the values move together at the same time; when a similar move happens with a lag, it does not capture that. It's more like a time series: this one rises, and then that one rises. Correct, that is not necessarily captured in a correlation matrix, but maybe it is captured by the deep learning model. Exactly. That is why autoencoders, or rather neural networks in general, are powerful: the non-linear part is also captured. And see, I did not come to this conference to talk about predicting stock prices. Dude, that is not a problem statement to be solved if somebody wants to go into finance; it cannot be solved, it is stochastic in nature. The problem statement we should solve is how we can help analysts, or, if you want to become a stock trader, what will help you take decisions. For example: if I see that Tata has risen by 20 percent, can I write some algorithm which tells me, with some probability, which other stocks follow suit? That is a better problem statement to solve than predicting, okay, if the Tata stock price is 20 today, what will it be tomorrow? That cannot be solved; you would have to add all the intelligence of the world. Great, thank you. I had a question regarding the optimizer you showed at the end.
It could have also easily said: just sell all the previous three stocks and buy the three new stocks. Okay, so let me help answer that. What are the four factors we are playing with here? Fewer transactions, a smaller number of total stocks, maximize return, minimize risk. The third and the fourth matter here too. Although at the onset it might seem that the optimizer will just remove the three old stocks and add the three new ones, I also want to maximize the return. So it is possible that you hold a stock with a good risk-return profile, and the optimizer will force you to keep it. Okay, but then it cannot ensure fewer transactions, right? It might still tell me to sell those three stocks and buy the three new ones. I see; that is why we have four parameters. These are the complications: the combinations of what we can do, and we want the optimizer itself to resolve them.

Hi, my name is Nandu. Hi. Have you or Morgan Stanley done any work on momentum trading or sentiment trading, or anything of that sort? I won't be able to comment on that specifically. But momentum is a very old concept. There was the Black-Scholes-Merton model, then the Fama-French three-factor model, and when the fourth factor was added, that was the momentum factor. So momentum is very old; firms will definitely be doing that. The sentiment thing is something everybody is trying to do, to be very frank. And we don't have direct access to traders, so I won't be able to comment as such, but that is definitely a problem everybody in the financial world is trying to solve. It is not easy, by the way.

Hi, maybe it's too early to ask: have you made any money till now using this? So, define money, man.
I have joined the organization, so I'm making money that way. But from direct trading, we are not allowed to trade. So you have not deployed this model yet? No, no, this has not been deployed in our systems. As you can see, we cannot deploy it as it is; that is why we have those next-step actions. Unless we arrive at the most efficient model, we cannot deploy it in production. This was a very simple intuition on how we can approach the problem. It requires a lot of work, a lot of research, a lot of backtesting, because you need to backtest over at least six months to a year; the cycles of finance are long. Things that happen in February may only happen again next February, stuff like that. So are you aware of any other organization that has actually deployed something using neural networks? I have many friends in other organizations, but I don't recollect any. See, I'll tell you again, as this gentleman asked: black-box models are not favored. We need to explain why we should trade on this, and unless that particular aspect is solved, I don't think it will see the light of day very soon.

Hello, Anand, one more question. Okay, maybe one. Yeah. Mostly on the reconstruction side: did you check how well the inputs are actually reconstructed by the autoencoders? And secondly, what kinds of autoencoders did you use for the entire process? Okay. On testing the autoencoders: see, we did not have any labeled data, so I can only visualize and see whether the results are correct. That is why I checked the risk-return graph, and the risk-return graph is proof enough that the results I got through the autoencoder are correct. Does that answer your question?
That applies to the last part, getting the frontier and all. But even before that, did you do anything on the reconstruction side, on how good the reconstructions actually are? Or does it just capture some important information and not care about the reconstruction? We are not dealing with that, because we don't want anything to come out of the reconstruction itself in the first problem statement. In the first problem statement, we just want to understand which stocks got reconstructed well; we let the model itself learn that some stocks are easier to reconstruct than others. Sir, he'll be available offline; you can just catch him offline. Great response to your talk. Let's have a round of applause for Anand.