My name is Mridul Mishra. I work as a senior director at Fidelity Investments. It's a financial services company, and you can make that out from the legal statement, so please don't invest based on anything I'm going to say. The topic I'm going to talk about today is explainable artificial intelligence.

The interesting part really is that Donald Trump, possibly a couple of days back, started accusing Google of influencing too many of our decisions. If you look at a typical day from the time you woke up, whether it was Google Maps, Twitter, Facebook, or any kind of social media, artificial intelligence was part of it. But that is possibly still the easier case. Now AI is being used to decide who gets parole or who goes to jail, and it arguably influences who gets elected as the most powerful person in the world, so it becomes that much more important to figure out what's really happening. There are doctors who are getting replaced by it. There are key decisions, like who gets a loan and who doesn't, being made by AI. So it becomes that much more important to figure out why a certain decision is being made.

What we'll talk about today is the why, the what, and the how. And considering we are in financial services, there is a lot of regulatory requirement that if you are making a certain call, you should be able to explain it, so we'll get into a bit of detail about that. We'll talk about what explainable AI is, why we need it, why we need it now, and then some of the approaches.

One of the most interesting things I heard about AI from the business leaders we keep talking to was that it is helping us make our decisions better. But is it helping us make better decisions? Can we actually probe into those decisions and figure out whether it is really helping us make fair decisions which are good for everybody? I have a bunch of data scientists who work with me, and one of the most common conversations we have goes like this: something needs to be predicted, whether it's something on text or a prediction for a particular stock price movement, and they will say, okay, we need a bunch of input data, then there is a black box, and then magic happens and the magician comes out with a prediction. The magicians here are called data scientists. Most of the time there is a black box, a miracle occurring somewhere inside it, and most often nobody has a clue. If you go back and say, hey, I think the accuracy is not right, they will say, let's try some other model, or let's see if we can add some more features. But nobody is very clear about how it was really working.

In investment management, the way we work, there were a lot of quant models. Investment management changed about 30 years back, and most investing is now quant driven. But most of those models were fairly predictable; they were very linear models. If you give a certain input, the output is expected, and that's how it works. Now what has started happening is that there is a lot of confusion about why a prediction happens. When we say XAI, or explainable artificial intelligence, we really mean AI which can be understood by humans. I highlighted "understood" there because it's not about explaining it to individuals; it's about it being understood by individuals.
I have a daughter who is in sixth standard now, and it's quite funny, because she will come up with an answer to her math problem and a lot of times we'll say, tell me the steps, because you may have got the answer but maybe you haven't got the concept. And we are the guardians, so we have to figure it out. In my view, we deal with AI the same way. We say, okay, you've got the prediction right, but actually explain to me how you got to it. In a way we humans, or "human agents" as my data scientist colleagues call us, take that higher pedestal: we know it, so why don't you explain it to me?

But before we go any further, I just wanted to question the notion that we are actually the higher ones. A lot of you have probably followed the AlphaGo versus Lee Sedol match. I specifically want to talk about game two. There was a move 37 in it, which was the move that essentially broke whatever confidence Lee Sedol had. I just want to take you back to the 10th of March 2016, when that game was being played. And because this is happening in India, we are not encouraging smoking, but Lee Sedol had stepped out to smoke, and AlphaGo just went ahead and played; it doesn't think about whether the opponent is there or not. So Aja Huang sees AlphaGo play move 37 and places the stone on the board. The commentators react: "Wow, really, really good." "That's a very surprising move. I thought it was a mistake." "When I saw this move, for me it was just a big shock. Normally a human would never play this one, because it's bad. We don't know why; it's just bad." "It's the fifth line. Normally you don't make a shoulder hit on the fifth line, so coming on top of a fourth-line stone is really unusual." "That's an exciting move. I think we've seen an original move here. That's the kind of move you play Go for." "Interesting stuff, a fifth-line shoulder hit. I wasn't expecting that. I don't really know if it's a good or a bad move at this point." The professional commentators almost unanimously said that not a single human player would have chosen move 37. "So I actually had a poke around in AlphaGo to see what AlphaGo thought."

The point really is that these were among the leading Go players in Korea, in the world really, and a couple of times they simply couldn't figure out why the computer did that. I stopped the clip there because there's a bit of discussion later: when they did the analysis, they figured out there was a one in 10,000 chance that a human would play that move. It had never been played by a human being. And the point here is that AI has reached a stage where we humans are possibly not even able to comprehend it, and it can come up with better decisions. But this was still a game of Go. This was not life or death. It was not somebody's retirement savings, where you were making a call on whether to invest or not. When we talk about those decisions, or whether you have cancer or not, that is where the explanation is required. The reason I wanted to show you that clip is that AI has reached a stage where we humans are possibly not on the higher pedestal we think we are when we demand explanations.

So let's talk about why we need explanations. I have a dog, a Siberian husky. How many of you have dogs?
And if you're a data scientist and you want a dog, typically what you will do is write a classification algorithm with computer vision which looks at all the passing puppies and tries to figure it out. A Siberian husky actually looks very similar to a wolf; even though it's a puppy, when it grows it becomes very wolf-like. So if you write a classification algorithm which tells you whether a picture is a husky or a wolf, this is what comes out. Say your algorithm is doing pretty well: in one out of six cases it wrongly classified a husky as a wolf, and in every other case it predicted correctly, which honestly is a dream level of accuracy. But there is a problem here, and this was one of the most talked-about cases in explainable AI circles last year. When they started looking into how the model was actually deciding whether it's a wolf or a husky, they realized it was looking at the snow around the animal, and what they had actually created was a snow detector: whenever it found snow in the image, it decided it must be a wolf, because dogs don't usually get photographed out in the snow. Now there's something fundamentally wrong there.

There was another example. There is an AI company here which does radiology X-ray analysis, and they trained on a lot of images to figure out whether there is a malignant tumour. They were getting 98, 99% accuracy, but when they started looking into it, they realized it was actually looking at the annotations the radiologist had made. Because the training data had those markings, it figured out that if there is a marking, it must be malignant, which is not really the objective, because then it is figuring something else out entirely. So something is fundamentally wrong unless you figure out how the model is reaching its decision.

There are four different reasons why we should look at explainability. One is the user: he certainly needs to know, if his loan got rejected, why it got rejected. Was it that his savings were not enough? Was it that he has taken too many withdrawals? What could he actually do to get it approved? The second is the data scientist's perspective: actually figuring out what is really happening, and not just doing hit and trial. There is a project called Deep Patient (I have no idea what the love affair with "deep" in everything is) where they found that people who have asthma and pneumonia actually survive better, and they started going down that thread, asking what it is about asthma that gives these higher chances of survival for pneumonia patients. Then they realized what was actually happening was simply that more care was being taken of those patients. Asthma is not really helping you with pneumonia, but if you had just gone with the correlation and said asthma patients do better with pneumonia, you would have got it absolutely wrong. So if you're a data scientist, you should go deep into the model and figure out why it happened.
In a regulated industry like financial services, I absolutely love GDPR, because it says all data subjects, which is you and me, have a right to go back to a company which has made a decision about them and have it explained. I never thought of myself as a "data subject", but that is where the world is going, and there is probably going to be one dictated here someday too. So if a regulator comes in, in regulated industries you have to go back and say exactly why a decision was made; there's really no choice. And most importantly, if you want funding, whether from your business or your boss, they really need to understand how it works. I have been part of a lot of these conversations where we explain a certain recommendation, and the business person sitting there will say, okay, these make sense, explain to me why they happened. And the funny thing is he will then say, whatever made sense I would have done anyway, so what's the value add? The ones which don't make sense, which are counter-intuitive, explain to me why you recommended those. And you're like, hmm, let's try something else. So I think it's very important, even from the perspective of selling it, that we get that part right.

Before I go any further, it's important to recognize that humans are not very logical creatures when it comes to explanations. All the work that has happened in behavioural science and behavioural finance proves it: we think we are all rational minds, but actually we are not. We don't spend as much time with family as we want, we don't invest for our future, we all want to get up early and go for a run but most of us don't, and we eat that extra candy at the dessert counter. So the learning has always been that, first, explanations need to be contrastive: if somebody's loan got rejected and we found that his savings were only 5 lakh, the next question will be, okay, so if it had been 10 lakh, would that have got it approved?

How many of you have watched Rashomon? A couple of you, great. It's the classic black-and-white Japanese movie from the 1950s. There's a killing of a samurai, and people look at it from different perspectives; each of them is telling the truth, but everybody's truth is different. The point, the Rashomon effect, is that if you are talking to different people in a group, the way they look at the same information is very different; it's very contextual. The same explanation doesn't work for the person the decision is impacting versus his boss versus the executive. Explanations also always focus on the abnormal: nobody asks why the expected thing happened; anything which looks abnormal needs to be explained. What we also learned is that good explanations are in some way coherent with what people already believed, and trouble starts whenever we challenge those beliefs. In a lot of cases, certain decisions were being taken in a certain way, whether it was the call to talk to a particular dealer or broker, how trading gets done, or selecting a particular exchange; humans have been doing it a certain way, and the moment the model starts coming back with something contrasting with what they were doing, the explanation becomes that much more important. And my learning really is that an explanation doesn't always need to be completely truthful. It's good if it is,
but a lot of times it is just a simplified version of reality, because the real decision-making is so complex that it is just not possible to be truthful in the legal sense of explaining all the causes. It is good enough if you cover most of it.

As I was saying, in the investment management industry we have been doing quantitative investing for 30 to 40 years now; since the 1980s that has been the trend. Most of the time those quant models worked as a kind of linear model. This chart is from one of the McKinsey reports: the green line shows how typical linear regression models used to work, very monotonic, very predictable in how the answers come out. But once you start getting into a lot of these machine learning models, you start seeing graphs like the other one (in this case a churn analysis for, I think, one of the telecom companies), where the relationships that come out cannot be represented linearly. So the representation itself has become very complex. And I remember, not too long back, maybe 10 years back, the factors we looked at for predicting where the market is going were as simple as, say, PMI, the purchasing managers' index, or we would just look at the inflation data or the job numbers coming out, and based on those we would predict where the market would be. Now, with big data and the ability to capture everything, any of these models typically runs on 100 to 200 factors just to predict the same thing that was being done with three or four factors. So both the more complex representations and the much larger feature sets are what have made this that much more difficult.

The way we put it internally is that there is a choice to be made. On one side there are interpretability concerns, on the other side there is accuracy, and, like they say, you don't always get everything; there is always a choice between more accuracy and a more linear model which is more interpretable. But when you show a graph like this, everybody says, hey, we want the best of both, so what we are really trying to do is move some of these approaches towards the right, where we still get accuracy higher than a linear model and still stay interpretable. It is not there yet the way it was with the earlier models, but you try to get there. The other factor which plays a role here is computation power. Those of you who did the CAP theorem in college possibly remember that you can only pick two. If you really want a deep learning model and it also needs to be interpretable, you need to figure out what all those weights were and possibly sample the feature set over all possible values, and computationally that becomes so intensive that it is just not feasible. So there is really a choice to be made between three axes, not even two: how much accuracy you want, how much interpretability, and how much computation you can really throw at it.

So far that was the context in terms of what it is and why we need it; from here we will move on to how it is being done. Before I start talking about specific approaches, I have to say that this is an area where research has been behind the adoption: industries have possibly moved on a lot in terms of using AI without figuring out how to explain it.
It was a lot like when electricity came: everybody built the grid, but nobody thought about building circuit breakers, and only when a fault took down the whole grid did they say, you know what, I think we need circuit breakers. A lot of this is still coming out of research; it is still happening in universities and in the research wings of the large organizations, and not necessarily available as a productionized version.

There are two broad buckets. One is global interpretability: can we explain, at the level of the whole model and all its predictions, how it is working? The second is local interpretability, where you are looking at a single instance, a single prediction, and figuring out, for my commute from my office to here, why it predicted 55 minutes for me, rather than explaining for everybody why the prediction is what it is. What we have seen is that local interpretability is possibly slightly ahead; doing it globally is still that much more challenging.

How many of you have heard of or used LIME before? Yeah, so LIME has possibly been the most popular, or at least the most talked about, over the last one and a half to two years. It's a fairly simple concept: you perturb the input data and use that to figure out how the prediction gets made; we will talk about LIME a bit more in a moment. The more interesting approach which has started coming up is counterfactual explanations. An interesting thing we learned recently is that in the US, credit scores were always supposed to be explainable. FICO, which produces the credit score in the US, has been using neural networks since '91 or '92, and they have always had to go back and explain: if they said your credit score is 820, how did they arrive at it? They had a couple of patents, which I think expired some time back, covering part of that explanation, and that gives us one of the approaches, FICO's Reason Reporter. The other thing is to look at interpretable architectures: at the very start of your problem definition, figure out whether it is worth going to something as complex as a deep learning model, or whether you should stay with something more interpretable, even if in some cases you have to think twice about it. And then there are the future directions coming up around interpretable latent features.

So, LIME is a method for fitting a local interpretable model that can explain a single prediction of any black-box machine learning model. It is model agnostic in a lot of ways: it doesn't care what you did internally. It is based on a local surrogate model. The idea is this: the representation you have is very complex, but what you are looking for is a specific region where you need an explanation, so LIME defines a local surrogate model for that specific region, and because it focuses on a very specific region, it can be a lot simpler than the complete model. It then generates a data set consisting of perturbed samples, and on that data set it trains an interpretable model which is weighted by the proximity of each sample to the instance you are trying to explain. So it looks at how close each sampled point is to the instance being explained and uses that weighting.
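To make that mechanism concrete, here is a minimal sketch of the local-surrogate idea for tabular data. This is written from scratch to show the idea rather than using the open-source lime package (which packages this up properly with its own API); the function name, parameters, and kernel here are illustrative assumptions, and it assumes a binary classifier exposing a scikit-learn-style predict_proba.

```python
# Minimal sketch of LIME's core idea (not the actual `lime` library):
# perturb an instance, weight samples by proximity, fit a simple linear surrogate.
import numpy as np
from sklearn.linear_model import Ridge

def explain_locally(predict_proba, instance, feature_scales,
                    n_samples=5000, kernel_width=0.75):
    """Return per-feature weights of a local linear surrogate around `instance`."""
    feature_scales = np.asarray(feature_scales, dtype=float)
    rng = np.random.default_rng(0)
    # 1. Perturb: sample points in a neighbourhood of the instance
    perturbed = instance + rng.normal(0.0, feature_scales,
                                      size=(n_samples, len(instance)))
    # 2. Query the black box on the perturbed points (probability of class 1)
    target = predict_proba(perturbed)[:, 1]
    # 3. Weight each sample by its proximity to the original instance
    distances = np.linalg.norm((perturbed - instance) / feature_scales, axis=1)
    weights = np.exp(-(distances ** 2) / (kernel_width ** 2))
    # 4. Fit an interpretable (linear) surrogate on the weighted samples
    surrogate = Ridge(alpha=1.0).fit(perturbed, target, sample_weight=weights)
    return surrogate.coef_  # local importance of each feature
```

The coefficients of the surrogate are then read as the local explanation: which features, around this one instance, pushed the black box's prediction up or down.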
One of the cutest examples they have, which gets used a lot, is a Labrador playing a guitar. The way they explain it is that, as you see at the bottom, they create many perturbed versions of the image, greying out different parts of it, and show how much each region contributes to each label. In this case the face gives away that it is a Labrador; if you just look at the body of the guitar, it says acoustic guitar; but if you look at the neck of the guitar, it says it might be an electric guitar and gives that a higher probability. It's a slightly confusing picture, because the body of the guitar is acoustic but the head and neck are electric, but what it is trying to show is: if I'm telling you that this was a Labrador playing an electric guitar, why did I say that? This is very useful in a lot of scenarios. It works for other cases too, such as text inputs: it creates separate perturbed data sets for the text and identifies which features actually drove the final prediction. It is pretty good for localized explanations, if your need is to explain a specific prediction, and I think it is possibly the best one to get started with; it's possibly the simplest, you can get going in a day or two. The interpretation for humans is pretty good, because it doesn't try to be the most complicated interpretation. But it is not exhaustive: it picks out the most important features needed for the explanation, and because it is not exhaustive, it doesn't work if you are in an industry where every single decision needs to be explained to auditors or regulators.

Then there are counterfactual explanations. This, I think, works in a fairly intuitive way. A lot of the work was done at Microsoft Research earlier this year, and DeepMind published a paper in late August on how you can start using it; there are a couple of GitHub projects which try to implement it. The reason I find it very interesting is that it really tries to mimic the way humans think. If you stay in Bangalore, traffic is the biggest thing to worry about; if I have to go back home, there are possibly three routes, and if it tells me to take MG Road, I will always think, why MG Road, why can't I just take the Outer Ring Road instead? A counterfactual explanation says, for every prediction: if X had not occurred, or if something else had happened, what would the output have been? That is a very human way of thinking. If you are looking at a prediction and there are 20 features, say education level, age, gender, race, most of the time what you are trying to figure out is: if the gender changed, is there an angle to it? If the age was slightly higher, would that change it? It is a very intuitive way of thinking, and counterfactuals try to simulate exactly that. The way it works is fairly simple: we change a feature value of an instance and see whether the prediction changes, and that tells us how much impact that feature had. Again, it is used for very instance-specific predictions.
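As a rough illustration of that "what would have had to change" idea, here is a very naive search sketch. It is not taken from the Microsoft Research or DeepMind implementations; the function name, arguments, and the one-feature-at-a-time greedy search are assumptions made purely for illustration, with a scikit-learn-style predict function.

```python
# Naive counterfactual search: nudge one feature at a time until the
# black box's decision flips, and report the smallest such change.
import numpy as np

def nearest_counterfactual(predict, instance, feature_names, step_sizes, max_steps=50):
    """Greedy single-feature search for the smallest change that flips the decision."""
    original = predict(instance.reshape(1, -1))[0]
    best = None  # (steps_needed, feature_name, new_value)
    for i, name in enumerate(feature_names):
        for direction in (+1, -1):
            candidate = instance.astype(float).copy()
            for step in range(1, max_steps + 1):
                candidate[i] = instance[i] + direction * step * step_sizes[i]
                if predict(candidate.reshape(1, -1))[0] != original:
                    if best is None or step < best[0]:
                        best = (step, name, candidate[i])
                    break
    if best is None:
        return "no single-feature counterfactual found within the search range"
    return f"if {best[1]} had been {best[2]:.2f} instead, the decision would have flipped"

# Hypothetical usage with a loan model:
# nearest_counterfactual(model.predict, applicant_row,
#                        ["savings", "income", "age"], step_sizes=[50_000, 25_000, 1])
```

Real implementations optimize a distance-plus-validity objective over all features jointly rather than this one-at-a-time scan, but the output has the same shape: "if your savings had been X, the loan would have been approved."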
There is an example which gets used a lot for counterfactual explanations. The way to read it is that you are trying to predict a student's average grade in the first year at law school, based on the GPA before they joined. The first column is the GPA before they joined the school, the next is the law school entrance exam score, and then the race, 0 being white and 1 being black. What the method did was create counterfactual data: it generated GPAs quite similar to the original GPAs, it generated law scores quite different from the originals, and it did something very racial: it converted most of the people to white, to show that if you want good average grades there is a very strong correlation with being white. It's a fairly racist model when you look at it, but it gives you a picture that the underlying model would probably not have stood up in court if somebody had asked why their grade was not predicted correctly. So, in a way, it gives you the answer that race, and your law school entrance exam results, are among the key features. The implementations are at very early stages; there are a few GitHub projects, and we have played around a bit with them. The interpretation, where it works, is fairly clear; I personally think it is very intuitive to think along those lines; it is model agnostic, it doesn't do anything specific to the model, and it is fairly easy to implement. But while it's fine if you're playing around with it, it's maybe not there yet for a production-like scenario.

I personally thought this next one was a very interesting concept: the Reason Reporter. If you go to FICO and search for "reason reporter" you'll find the earlier patents they had on this, which have become open now. The way this works is, let's say a new data point comes in and the credit score has to be explained. What they do is bin things: they look at different features, assign scores, and put them into bins. Say the first feature is the salary of the individual; they will say if you are earning between a hundred and a hundred twenty thousand dollars your score will be eight hundred fifty, and if you are at thirty to forty thousand dollars it will be six hundred fifty. They build bins like that for each and every feature, and when they have to explain why a certain score was given, they go back and figure out which feature bins most closely match the output. In this case, eight sixty was the output; they see that eight fifty is fairly close, the bin for feature one was in that range, so they go back and say the reason code is sixty-five and, for that feature, the user falls into that feature bin. So rather than dealing with individual values, it tries to put things into bins. It's a very useful concept, because if you think about a lot of the discrete data sets out there, stock prices, bond maturity dates, it is very important to group them into something which is human-comprehensible and relatable, rather than dealing with each individual value. It's a very interesting implementation.
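To illustrate the binning idea, here is a toy sketch. The bins, point values, and reason codes below are invented for illustration only; they are not FICO's actual tables, codes, or API, just the general shape of a scorecard with human-readable reason codes attached to each bin.

```python
# Toy scorecard: each feature is cut into human-readable bins, each bin carries
# a score contribution and a reason code; the "reason report" is simply the
# bins that cost the applicant the most points. All values are illustrative.
SCORECARD = {
    "annual_income": [            # (lower bound, upper bound, points, reason code)
        (0,       30_000,  10, "R65: income in the lowest band"),
        (30_000,  100_000, 40, "R64: income in the middle band"),
        (100_000, None,    80, "R63: income in the highest band"),
    ],
    "credit_utilisation": [
        (0.0, 0.3,  80, "R12: low utilisation of available credit"),
        (0.3, 0.7,  40, "R11: moderate utilisation"),
        (0.7, None,  5, "R10: high utilisation of available credit"),
    ],
}

def score_with_reasons(applicant):
    total, reasons = 0, []
    for feature, bins in SCORECARD.items():
        value = applicant[feature]
        for low, high, points, code in bins:
            if value >= low and (high is None or value < high):
                total += points
                reasons.append((points, code))
                break
    reasons.sort()  # lowest-contributing bins first
    return total, [code for _, code in reasons[:2]]

print(score_with_reasons({"annual_income": 35_000, "credit_utilisation": 0.8}))
# -> (45, ['R10: high utilisation of available credit', 'R64: income in the middle band'])
```

The explanation handed back to the customer is then just the bins that cost them the most points, which is roughly the shape of a reason report: human-readable ranges rather than raw model internals.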
The other thing we saw is that, whatever the model does, there will always be some human or domain knowledge that needs to be added, especially if you are in a regulated industry. A lot of the time the data will tell you that a certain gender or a certain race or a certain age group has been given certain privileges, and the data will keep telling you that if somebody comes in with an age beyond 60, maybe you don't give him a house loan, because it has never been given before. Now the problem with that is, if the regulator comes back and figures out that you have been discriminating against people because their age was higher, that becomes a problem. So what we have started realizing is that you still need to have some rules. In the example I was talking about, if you realize that age has been a critical factor in the model scoring somebody's credit down, you manually flip it: either you go back and impute the data and say, I will make sure that people above 60 also start getting approved, or you adjust the final scorecard and say, fair enough, let the model predict, but I will add some extra points just because you are in that age group, because we have seen a lot of this discrimination coming from the data. That's the real-world problem: most of the data sets we get will be skewed in some way, based on how human agents have been working in a non-ideal world, and if we want to move closer to what we want the world to be, that domain knowledge has to be added to counterbalance what the data is saying.

With that, this is possibly the last slide, the summary. I hope I have been able to convince you that it is very important for the acceptance of machine learning that we are able to explain it, not only for the users or the auditors, but also for the practitioners who are building these models, because a lot of times there is something more hidden than what meets the eye, and if it gets caught by a regulator or in production, that's a much bigger problem. So it certainly needs to be a focus area for everybody. It's not there yet; it will possibly take a year or two from now to become mainstream and get adopted. A lot of work is happening; recently we were talking to the Salesforce Einstein team, and they have built explanations for some of the decisions it makes, and it will keep getting better as it goes along. But it needs to be a focus area, and if you are in a regulated industry making these critical decisions, there really is no choice: before a model goes into production, you really need to get your act together in terms of making it explainable.

That was all I had in the slides. Thoughts, questions, feedback?

That's a very good question, actually, and one of the reasons for putting up that computation slide: it just becomes prohibitive in terms of computation to do this for every record. The way we have been looking at it is that there are some use cases where there is no choice, and for those we make sure that once the model has been evaluated on accuracy, one part of the QA before it goes live is that it is explainable using one of these approaches (I'm not allowed to say which one we use). But there are very few use cases where each and every prediction needs to be explained at that level, where the interpretability requirements are that high. One of the things we have learned is that at the very early stages, when we are having the conversation about what the business problem is, we ask this question: ultimately, how interpretable do you need this to be? If you're building a model to predict who will click on
your ad, I really don't care; it's a few cents here and there, no big deal. But if you're going to predict whether somebody's X-ray is malignant or not, interpretability is that important, and maybe the cost of computation is justified. So I don't know if there is as clear a framework as you were hoping for, but that's how we have been trying to approach it.

Yeah, so the thinking has been that it should be part of your QA process in a lot of ways, especially for those use cases where the auditors might come back and you would have to explain, or where you could get sued for a specific decision, or it's a life-or-death kind of conversation. So it gets very specific to those use cases where there's no choice: even though it's computationally very costly, it possibly needs to be done. It will delay the overall rollout of your model, but for those industries and those use cases it's better to be safe.

Sorry, I have a question. First of all I have to apologize, I joined in the middle so I didn't fully see what you presented. I'm from the financial services domain. In financial services you have to be very clear in explaining to a customer what it is they have to change so that their credit score can improve. Solutions like LIME are not surgical enough to communicate that clearly. So of the techniques you have seen, is there a clear recommendation for explaining a deep learning score in terms of what the customer should do to improve it?

I think what you should look at is what FICO did with the Reason Reporter, and the binning approach, because the point really is that when customers ask, they are not really trying to figure out what your model did; more often than not the question is, what should I do to make sure my credit score goes up or my loan gets approved? If you come from that angle, of figuring out which features should be changed to change the decision, I thought the Reason Reporter does a pretty good job of abstracting the whole problem and coming back and saying, these are the reason codes, look at your feature bins, this is what you should work on. So I think that is something worth looking at, specifically in financial services. Thank you.

Can you give me an example of what you are saying? I am not sure. We used to have linear models, whether logistic regression or anything else, which were explainable: there are simple coefficients, you add them up and you get it. But as I was showing you, if your representation starts getting too complex, which it does in a lot of cases, specifically deep learning, and your feature set runs into the hundreds, working backwards to create that would be quite a challenge. I haven't seen it personally; if it is possible, great, but I haven't seen it.

Excuse me, I have a small question. One of the ways of getting explainability in a model is probably using an old technique of AI: in the days of symbolic AI there were reasoning and logic frameworks. Is there work going on to combine those with deep learning models, or things like planning and learning being tried or attempted?

Yeah, so you are right, the world was fairly simple at that point in time. The example I was giving was that when we used to predict market movements, the factors we looked at were fairly straightforward: we looked at the job data,
the inflation data, and the confidence numbers coming out, and in those cases it was like expert systems, driven by domain knowledge. How many of you have used expert systems before? A bunch of people, yeah. For those who haven't heard of them, this was the AI of the 70s, 80s and 90s. In those times it was pretty easy to go back and explain, because in a lot of ways the rules came out of somebody's mind and the feature set was fairly limited. It is not easy now: once we get into hundreds of features, 200 or 300, and nobody really knows how they are correlated, it's very difficult. The only way, really the crudest way, is to impute or remove a feature, see the impact, do the same with the next one, or look at it at the row level and do counterfactuals. I don't know if that golden era of explainability will come back, because it's a lot more complicated world now.

Okay, so this is getting into a more philosophical range. I don't really understand how my car works; I expect it to do the functional job of taking me from point A to point B, and because it has worked as an industry, it has reached that level of credibility. But if you watch old Hindi movies, there used to be those scenes where smoke comes out and you actually open the hood and figure out where the water needs to go, so there was a time when you actually needed to know how it worked. Maybe we will reach a stage where we start trusting the decisions coming out of AI algorithms and say, okay, if the machine has said it, it knows more than we do; we will get to that level of credibility. But we are at a very early stage, and the decisions it is making are life-changing for people. So my take is, maybe society will accept it someday, maybe regulators will say, if the machine has said it, it's a move 37, we don't know whether it is good or bad and humans just don't get it, but we are not there yet.

Is that more like a fundamental measure for the calculation? Yeah. No, I haven't actually, no I haven't, but it's called Shapley, okay, I will certainly look at it. Correct, correct, yeah, H2O is doing some great work in that area, absolutely. There are vendors here, so maybe no comments on specific ones.

Cool. Can we use some methods rather than doing it manually,
AI to explain the AI? Yeah, so a lot of this is, in a way, already generated: even if you use LIME, in a lot of ways it uses a surrogate model which generates those explanations. But the point really is that this is for human consumption, so if you finally have to convince a user, you don't want the machine to game whatever explanation it gives you; at least at this stage we still need that kind of human verification.

I agree, yeah. You need to be at a certain level of maturity before you try achieving this, and if you're not even versioning your models and data, then yes, there is some way to go. Totally agree.

Somebody was saying something there, some comment or feedback? Yeah, yeah, a fairly valid point, and the funny thing is that somehow we humans have this urge to understand everything, even when we don't. If you listen to the news when the market goes up, they will say, oh, the market has gone up because this data came out and because of certain geopolitical tensions, and if the market goes down, the same reasons will be used. We have that urge. I remember when Trump was getting elected, everybody was betting on Hillary to win, and we were saying if Trump wins it will be a disaster; it's been two years, and we can still give reasons, and he gives reasons better than I can. I think there is more of a psychological reason to ask for justification than a real need to understand. And I get your point: in some ways we don't really understand a lot of complex systems, and once that level of trust gets built, maybe nobody will ask; it just becomes a commodity. But if you watch any news channel, all the experts are trying to give justifications for anything and everything, when the world is just too complex to be explained in those soundbites. So I think it's more a psychological urge than a technical thing, really.

Cool, thank you.