Hey guys, I came across this blog post on 41 machine learning interview questions by Roger Huang. It has some pretty interesting questions covering concepts, applications, and research in machine learning. The post has decent answers too, but I'll provide my own interpretation for 10 of these questions. Depending on how you like this, I'll make answering interview questions a series on the channel. Now, these are interview questions, so some responses will be terse; you don't have time for stories over the phone. That said, I'll go deeper on the questions I found interesting so you get a better understanding. If at any point in the video you think I've answered something incorrectly, or you have a better explanation, feel free to call me out in the comments down below and provide your own intuition; we can all learn from that. With that, let's get started.

What is the difference between probability and likelihood?

There are two entities involved here. The first is the data, a set of observations D. D is a set of N samples X with their corresponding labels Y. For example, in an email classifier that labels a given email as spam or not spam, X is an email and Y is a binary label, spam or not spam. The second entity is a model that performs some task, like the email classifier we just mentioned, and it is parameterized by some theta. These two entities are connected by some function f; f is basically the model, and it has two phases, a training phase and a testing phase.

During the training phase you don't know theta, but you're given some training data D. This is like having a bunch of emails along with their labels, spam or not spam. We use this training data to learn the parameters of our classifier, that is, to learn theta: given some D, we want the theta that maximizes performance. This is the essence of a method of optimization called maximum likelihood estimation.

After training we have the model, that is, the email classifier, since we now have theta. What we can do now is test the model on unseen data. We give the classifier some unseen emails and it spits out whether each email is spam or not spam. It's during this testing phase that, given some theta, we determine the chance of observing an outcome. That is probability.

So, the difference: probability is for the testing phase, where given a theta we determine the probability of observing an outcome; likelihood is for the training phase, where given some observed outcomes we determine the theta that maximizes the probability of those outcomes having occurred. Although they look similar in mathematical notation, they have very different meanings. Note that this is just my take on the difference; I referenced a few sources to make sure, and I saw a great thread on Stack Exchange with different interpretations, so check that out in the description. And like I said at the beginning of the video, if you have a better explanation for this answer, or any of the answers I give, feel free to call me out in the comments down below.
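To make the distinction concrete, here's a minimal sketch I'd point to, with a biased coin standing in for the email classifier. The specific numbers (7 heads out of 10 flips, a candidate theta of 0.7) are made up for illustration and aren't from the original post.

```python
# Probability vs likelihood with a biased-coin model (illustrative numbers only).
import numpy as np
from scipy.stats import binom

# Probability: theta is FIXED, the outcome varies.
# "If the coin lands heads with probability 0.7, how likely are 7 heads in 10 flips?"
theta = 0.7
print(binom.pmf(7, n=10, p=theta))          # P(data | theta)

# Likelihood: the DATA are fixed (we observed 7 heads in 10 flips), theta varies.
# Maximum likelihood estimation scans candidate thetas and keeps the best one.
thetas = np.linspace(0.01, 0.99, 99)
likelihood = binom.pmf(7, n=10, p=thetas)   # L(theta | data): same formula, different "variable"
theta_mle = thetas[np.argmax(likelihood)]
print(theta_mle)                            # about 0.7, the maximum likelihood estimate
```

Same binomial formula in both calls; what changes is which argument is held fixed, which is exactly the probability versus likelihood distinction.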
What is Bayes' theorem, and how is it useful in a machine learning context?

Bayes' theorem lets us determine posterior probabilities from our priors when presented with evidence; more simply, it is a method of revising existing predictions given new evidence. In odds form: how much more likely A is than B now equals how much more likely A was than B before we saw the new evidence, times how much more likely that evidence would be to occur if A were true than if B were true. In symbols, P(A|E) / P(B|E) = [P(A) / P(B)] * [P(E|A) / P(E|B)]. In machine learning, Bayes' theorem forms the fundamental assumption of the naive Bayes classifier, a generative model for classification.

What is the difference between a generative and a discriminative model?

Discriminative models learn decision boundaries between classes; generative models learn the distributions of the classes themselves. An SVM is discriminative because we are learning a decision boundary; it is a maximum-margin classifier, after all. Logistic regression is also discriminative, since we learn a linear decision boundary. Decision trees are discriminative too, as each non-leaf node partitions the space, creating boundaries. Naive Bayes classifiers are generative, as they learn the distributions of the classes themselves.

Another difference is how they cope when the data distribution shifts. Say we build a system where the distribution of the cross-validation or test data differs from that of the training data. It is much easier to adjust a distribution than to change the nature of a decision boundary, so generative models tend to handle this better. If you are certain that the test data classes will have the same distribution as your training data classes, this may not be a concern; however, it can very well happen in the real world.

A more mathematical take: during the training phase, many parametric models start from the same point, maximum likelihood estimation, which is what I talked about while distinguishing probability and likelihood in the first question. Given some data, we determine the parameters of the model such that performance is maximized. In discriminative models we maximize the conditional likelihood, that is, the conditional probability of the labels given the inputs as a function of the model parameters. In generative models like the naive Bayes classifier, we maximize the joint likelihood, that is, the joint probability of inputs and labels as a function of the model parameters.
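Here's a minimal sketch of that contrast using scikit-learn. The toy dataset and the specific model choices (GaussianNB standing in for the generative naive Bayes idea, LogisticRegression as the discriminative one) are mine for illustration, not from the original post.

```python
# Generative vs discriminative classifiers on a toy dataset (illustrative only).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB            # generative: models P(x | y) and P(y), applies Bayes' rule
from sklearn.linear_model import LogisticRegression   # discriminative: models P(y | x) via a linear boundary

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

generative = GaussianNB().fit(X_train, y_train)
discriminative = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("naive Bayes accuracy:        ", generative.score(X_test, y_test))
print("logistic regression accuracy:", discriminative.score(X_test, y_test))

# GaussianNB exposes the learned class distributions (theta_ holds per-class feature means),
# while logistic regression only exposes the weights of its decision boundary (coef_).
print(generative.theta_.shape, discriminative.coef_.shape)
```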
What cross-validation technique would you use on a time series data set?

In normal cross-validation, say k-fold, we split the data into k equal-sized chunks, use k-1 chunks for training and the remaining chunk for testing, and then average the performance over the k tests to get a performance measure. The special case where k equals n is leave-one-out cross-validation. However, time series data isn't just a bag of points: we cannot include samples in the training set that occur later in time than the test point. So when performing something like leave-one-out cross-validation on a time series, we select a point as the test set and only include the points that occur before it in time as the training set. There are also situations where we want a multi-step forecast, in which case we only include training points taken at least some time t before the test point.

How is a decision tree pruned?

Pruning involves removing nodes and branches from a decision tree to make it simpler, so as to mitigate overfitting and improve performance. Say we've constructed a decision tree and we have a validation set. For each leaf node we can determine the node purity; ideally we want leaves to be as pure as possible for high accuracy, but it's very easy to overfit, so much so that leaf nodes may contain only a single data point. We can mitigate this by pruning the tree. Consider reduced-error pruning: with the validation data, determine the performance of the original tree T. Now take a subtree T1 and replace it with a leaf. If the validation performance doesn't drop significantly, we keep the simpler tree; Occam's razor comes into play here. If we pruned, we treat the pruned tree as the new original and continue to the next subtree.

How would you handle an imbalanced data set?

If you have a lot of data to work with in the underrepresented class, you can try random undersampling, which involves discarding some of the overrepresented class's samples from the training data. You don't have to target a perfect one-to-one ratio, but something close should work just fine. If you don't have much data to work with, you can perform random oversampling: take the underrepresented class and sample with replacement until you reach the required ratio. The Synthetic Minority Oversampling Technique (SMOTE) goes a step further and synthesizes new minority samples, by interpolating between existing ones, rather than making exact copies. Ensemble learning algorithms also tend to work well for data imbalance, typically boosting, bagging, and random forest techniques; aggregation tends to mitigate overfitting to a specific class.

What evaluation approaches would you use to gauge the effectiveness of a machine learning model?

You can talk about cross-validation here. I'm not going to explain it in depth, since it comes up as part of several of the other answers.

How do you handle missing or corrupt data in a data set?

To deal with missing values, we can perform data imputation. The big idea is that where data is missing, you fill in a value, but the fill depends on the data type. For categorical features you can add a new category like "no clue" or "other". For numeric features you can impute with 0 and add an indicator variable showing that the value was missing; the model can then account for the missingness itself.

How would you deal with outliers?

Analyze the data with and without the outliers; we don't know whether removing them will have some adverse effect, after all. That said, there are two common methods of dealing with outliers. One is trimming, where we delete the outliers altogether. The other is winsorizing, where we ceil or floor the value to the closest acceptable non-outlier value, that is, the maximum or minimum value we're willing to allow; this is usually the preferred technique. A few short code sketches for these last answers follow below.
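First, the time series cross-validation answer. This is a minimal sketch using scikit-learn's TimeSeriesSplit; the synthetic data and the Ridge model are stand-ins I picked for illustration, not anything from the original post.

```python
# Time-series-aware cross-validation: train only on points that come earlier in time.
import numpy as np
from sklearn.model_selection import TimeSeriesSplit
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                                   # 100 time steps, 3 features
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

scores = []
# Each split trains only on points that occur BEFORE the test chunk;
# gap=5 leaves a buffer, mimicking the "at least time t before" multi-step case.
for train_idx, test_idx in TimeSeriesSplit(n_splits=5, gap=5).split(X):
    model = Ridge().fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[test_idx], y[test_idx]))

print("per-fold R^2:", np.round(scores, 3))
```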
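Next, the imbalanced data answer. Here's a minimal random oversampling sketch in plain NumPy; the 95/5 class split is invented for illustration, and in practice a library like imbalanced-learn (RandomOverSampler, SMOTE) does this for you.

```python
# Random oversampling of the minority class until the classes are balanced.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (rng.random(1000) < 0.05).astype(int)       # roughly 5% positive class: imbalanced

minority_idx = np.flatnonzero(y == 1)
majority_idx = np.flatnonzero(y == 0)

# Sample the minority class WITH replacement until it matches the majority count.
extra = rng.choice(minority_idx, size=len(majority_idx) - len(minority_idx), replace=True)
balanced_idx = np.concatenate([majority_idx, minority_idx, extra])
rng.shuffle(balanced_idx)

X_bal, y_bal = X[balanced_idx], y[balanced_idx]
print("before:", np.bincount(y), "after:", np.bincount(y_bal))
```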
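For the missing data answer, here's a minimal sketch of the impute-with-zero-plus-indicator idea using pandas; the tiny DataFrame is made up for illustration.

```python
# Imputation: indicator column plus a fill value for numerics, explicit bucket for categoricals.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "age":  [34, np.nan, 29, np.nan],
    "plan": ["basic", None, "premium", "basic"],
})

# Numeric column: add the indicator first, then impute with 0 so the model can
# tell "actually 0" apart from "was missing".
df["age_missing"] = df["age"].isna().astype(int)
df["age"] = df["age"].fillna(0)

# Categorical column: missing values become their own category.
df["plan"] = df["plan"].fillna("other")

print(df)
# scikit-learn's SimpleImputer(add_indicator=True) does the numeric part in one step.
```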
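And for the outliers answer, a minimal sketch of trimming versus winsorizing with NumPy; the 1st and 99th percentile cutoffs are an arbitrary choice for illustration.

```python
# Trimming (drop the points) vs winsorizing (clip them to an acceptable bound).
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(size=1000), [15.0, -12.0, 20.0]])  # a few extreme values

lo, hi = np.percentile(x, [1, 99])

trimmed = x[(x >= lo) & (x <= hi)]   # trimming: outliers removed entirely
winsorized = np.clip(x, lo, hi)      # winsorizing: outliers floored/ceiled to the bounds

print("original max:", x.max(), "winsorized max:", winsorized.max())
print("kept after trimming:", trimmed.size, "of", x.size)
# scipy.stats.mstats.winsorize offers the same clipping by quantile fractions.
```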
Why us? This question can be asked while applying to any role at any company. Come up with an answer that's more than just "a recruiter contacted me, so I think I'm a good fit." Ideally your response should tie your background to the current role, showing beyond any doubt that you are a great fit. As a grad student on my own job-hunting escapade, this is what I'm currently doing.

And that's it: 10 data science interview questions answered. I hope this helps you crack those interviews, or if you're not looking for a job, that you at least learned something from this video. If you like my teaching style, show some love with a like, and comment your interview experiences down below; I love reading them, and they're bound to help someone out there just like you.

Now that the video is over, a little about myself. My name is Ajay Halthor, and I run a YouTube channel covering deep learning, machine learning, data science, and other frontiers of artificial intelligence. If you want to stay on top of trending deep learning research or understand the mechanics behind fundamental machine learning concepts, then subscribe to CodeEmporium; links are in the description down below. Thanks for sticking around until the end. If you still haven't had your daily dose of knowledge, click or tap one of the videos on screen for another awesome video, and I'll see you in the next one.