# Top 10 job interview questions for data science

## 1. What is Selection Bias?

**Selection bias** is a kind of error that occurs when the researcher decides who is going to be studied. It is usually associated with research where the selection of participants isn't random, and it is sometimes referred to as the selection effect. It is a distortion of statistical analysis resulting from the method of collecting samples. If selection bias is not taken into account, some conclusions of the study may not be accurate.

The types of selection bias include:

**Sampling bias:** A systematic error due to a non-random sample of a population, causing some members of the population to be less likely to be included than others, resulting in a biased sample.

**Time interval:** A trial may be terminated early at an extreme value (often for ethical reasons), but the extreme value is likely to be reached by the variable with the largest variance, even if all variables have a similar mean.

**Data:** Specific subsets of data are chosen to support a conclusion, or bad data are rejected on arbitrary grounds instead of according to previously stated or generally agreed criteria.

**Attrition:** Attrition bias is a kind of selection bias caused by attrition (loss of participants): trial subjects or tests that did not run to completion are discounted.
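A quick simulation makes sampling bias concrete. The population, salaries, and age cutoff below are all invented for illustration:

```python
import random

random.seed(0)

# Hypothetical population: 10,000 people whose salary grows with age.
population = [{"age": a, "salary": 20_000 + 1_000 * a + random.gauss(0, 5_000)}
              for a in [random.randint(20, 65) for _ in range(10_000)]]

def mean_salary(people):
    return sum(p["salary"] for p in people) / len(people)

# Unbiased: a simple random sample of the population.
random_sample = random.sample(population, 500)

# Sampling bias: the survey only reaches people under 35
# (think of an online-only poll).
biased_sample = [p for p in population if p["age"] < 35][:500]

print(round(mean_salary(population)))
print(round(mean_salary(random_sample)))  # close to the population mean
print(round(mean_salary(biased_sample)))  # systematically lower
```

The biased sample's mean salary is far below the true population mean, and no amount of extra data collected the same way fixes it: the error is systematic, not random.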

## 2. What is bias-variance trade-off?

**Bias** is an error introduced into your model by oversimplification in the machine learning algorithm. It can lead to underfitting: when you train your model, it makes simplified assumptions to make the target function easier to learn.

**Low bias** machine learning algorithms: decision trees, k-NN and SVM.

**High bias** machine learning algorithms: linear regression, logistic regression.

**Variance** is error introduced into your model by an overly complex machine learning algorithm: the model learns noise from the training data set and performs badly on the test data set. It can lead to high sensitivity and overfitting.

Normally, as you increase the complexity of your model, you will see a reduction in error due to lower bias in the model. However, this only happens until a particular point. As you continue to make your model more complex, you end up over-fitting your model and hence your model will start suffering from high variance.

**Bias-Variance trade-off:** The goal of any supervised machine learning algorithm is to have low bias and low variance to achieve good prediction performance.

The k-nearest neighbour algorithm has low bias and high variance, but the trade-off can be changed by increasing the value of k which increases the number of neighbours that contribute to the prediction and in turn increases the bias of the model.

The support vector machine algorithm has low bias and high variance, but the trade-off can be changed by increasing the C parameter that influences the number of violations of the margin allowed in the training data which increases the bias but decreases the variance.

There is no escaping the relationship between bias and variance in machine learning. Increasing the bias will decrease the variance. Increasing the variance will decrease bias.
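The k-NN trade-off described above can be sketched on a toy problem (the data and noise level are invented for illustration). With k = 1 the model chases noise (low bias, high variance); with k equal to the whole training set every prediction collapses to the global mean (high bias, low variance); an intermediate k balances the two:

```python
import random
import statistics

random.seed(1)

# Toy 1-D regression problem: y = x^2 plus noise.
train = [(x, x * x + random.gauss(0, 0.5))
         for x in [random.uniform(-2, 2) for _ in range(200)]]
test_points = [(x, x * x) for x in [random.uniform(-2, 2) for _ in range(200)]]

def knn_predict(x, k):
    # Average the targets of the k nearest training points.
    neighbours = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return statistics.mean(y for _, y in neighbours)

def mse(k):
    return statistics.mean((knn_predict(x, k) - y) ** 2
                           for x, y in test_points)

# k=1: low bias, high variance (fits the noise).
# k=200: every prediction is the mean of all targets -> high bias.
for k in (1, 15, 200):
    print(k, round(mse(k), 3))
```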

## 3. What are the differences between over-fitting and under-fitting?

In statistics and machine learning, one of the most common tasks is to fit a model to a set of training data, so as to be able to make reliable predictions on unseen data.

In **overfitting**, a statistical model describes random error or noise instead of the underlying relationship. Overfitting occurs when a model is excessively complex, such as having too many parameters relative to the number of observations. A model that has been overfitted has poor predictive performance, as it overreacts to minor fluctuations in the training data.

**Underfitting** occurs when a statistical model or machine learning algorithm cannot capture the underlying trend of the data. Underfitting would occur, for example, when fitting a linear model to non-linear data. Such a model too would have poor predictive performance.
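Both failure modes show up when fitting polynomials of different degrees to noisy quadratic data (a made-up example, assuming numpy is available): degree 1 underfits the curve, while a degree with as many parameters as observations interpolates the noise and overfits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up data: a quadratic trend plus noise, 15 observations.
x_train = np.linspace(-3, 3, 15)
y_train = x_train ** 2 + rng.normal(0, 1, x_train.size)
x_test = np.linspace(-3, 3, 100)
y_test = x_test ** 2  # the noiseless underlying trend

def fit_mse(degree):
    # Fit a polynomial of the given degree, then measure test error.
    coeffs = np.polyfit(x_train, y_train, degree)
    return float(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))

# Degree 1 underfits (linear model, non-linear data);
# degree 14 overfits (15 parameters for 15 observations);
# degree 2 matches the underlying trend.
for d in (1, 2, 14):
    print(d, round(fit_mse(d), 2))
```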

## 4. Explain the difference between L1 and L2 regularization methods.

A regression model that uses the L1 regularization technique is called Lasso Regression, and a model which uses L2 is called Ridge Regression. **The key difference between these two is the penalty term.** L1 adds the sum of the absolute values of the coefficients to the loss, which can shrink some coefficients to exactly zero and therefore performs a kind of feature selection. L2 adds the sum of the squared coefficients, which shrinks all coefficients towards zero but rarely makes any of them exactly zero.
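A minimal numpy sketch of the two penalties (the weights and the λ value are arbitrary illustration numbers). The soft-thresholding update shown for L1 is one standard way lasso is optimised, and it is what drives small coefficients to exactly zero:

```python
import numpy as np

w = np.array([0.05, -0.3, 2.0])  # illustrative model weights
lam = 0.1                         # illustrative regularization strength

l1_penalty = lam * np.sum(np.abs(w))  # Lasso adds lambda * sum(|w|)
l2_penalty = lam * np.sum(w ** 2)     # Ridge adds lambda * sum(w^2)

# One update step illustrates the qualitative difference:
# L2 shrinks every weight proportionally...
ridge_step = w * (1 - 2 * lam)
# ...while L1's soft-thresholding sets small weights exactly to zero.
lasso_step = np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

print(lasso_step)  # the 0.05 weight becomes exactly 0 -> sparsity
print(ridge_step)  # all weights shrink, but none become zero
```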

## 5. What is the difference between machine learning and deep learning?

Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed. Machine learning can be categorised into three types:

- **Supervised machine learning**
- **Unsupervised machine learning**
- **Reinforcement learning**

Deep Learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain called artificial neural networks.

## 6. What are exploding gradients?

While training an RNN, if you see exponentially growing (very large) error gradients which accumulate and result in very large updates to neural network model weights during training, they’re known as **exploding gradients**. At an extreme, the values of weights can become so large as to overflow and result in NaN values.
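A common remedy is gradient clipping: rescale the gradient whenever its norm exceeds a threshold. A minimal sketch (the gradient values and the threshold are illustrative):

```python
import numpy as np

def clip_by_norm(grad, max_norm=1.0):
    """Rescale the gradient so its L2 norm never exceeds max_norm."""
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        grad = grad * (max_norm / norm)
    return grad

# A gradient that has "exploded" during backpropagation through time.
exploded = np.array([300.0, -400.0])
clipped = clip_by_norm(exploded, max_norm=5.0)

print(np.linalg.norm(exploded))  # 500.0
print(np.linalg.norm(clipped))   # 5.0: capped, direction preserved
```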

## 7. What are vanishing gradients?

While training an RNN, the gradient can become too small, which makes training difficult. When the slope is too small, the problem is known as a **vanishing gradient**. It leads to long training times, poor performance, and low accuracy.
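The effect is easy to reproduce: backpropagation multiplies layer derivatives together, and the sigmoid's derivative is at most 0.25, so the gradient shrinks exponentially with depth:

```python
import math

def sigmoid_grad(x):
    s = 1 / (1 + math.exp(-x))
    return s * (1 - s)  # maximum value 0.25, at x = 0

# Backpropagating through 50 sigmoid layers multiplies 50 small
# derivatives together, even in the best case (x = 0 at every layer).
grad = 1.0
for layer in range(50):
    grad *= sigmoid_grad(0.0)

print(grad)  # ~7.9e-31: effectively zero after 50 layers
```

This is one reason architectures like LSTMs and activations like ReLU, whose derivative is 1 over its active region, are preferred in deep networks.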

## 8. What do you understand by the term Normal Distribution?

Data is usually distributed in different ways, with a bias to the left or to the right, or it can be all jumbled up. Sometimes, however, data is distributed around a central value without any bias to the left or right, and it follows a **normal distribution** in the form of a bell-shaped curve.
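A quick check with random samples from a standard normal (mean 0, standard deviation 1) recovers the classic bell-curve property that roughly 68% of values fall within one standard deviation of the mean:

```python
import random
import statistics

random.seed(42)
samples = [random.gauss(mu=0, sigma=1) for _ in range(100_000)]

mean = statistics.mean(samples)
stdev = statistics.stdev(samples)
within_1sd = sum(-1 <= s <= 1 for s in samples) / len(samples)

print(round(mean, 2), round(stdev, 2))  # close to 0 and 1
print(round(within_1sd, 2))             # close to 0.68 (68-95-99.7 rule)
```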

## 9. Tell me about how you designed a model for a past employer or client.

You know you have to do this one on your own, folks. ;) **You can do it!**

## 10. What Are Hyperparameters?

In machine learning, most algorithms have parameters that control how the algorithm learns, ranging from the learning rate, which defines how fast an algorithm converges, to more sophisticated ones such as the number of trees in a random forest.

Both neural networks and classical ML algorithms have these, and a real difference between experienced and novice ML practitioners is in tuning them.
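One simple tuning strategy is grid search: evaluate every combination of candidate hyperparameter values and keep the best. The validation-score function below is a made-up stand-in for actually training and validating a model:

```python
import itertools

# Hypothetical validation score; in practice this would train a model
# with the given hyperparameters and evaluate it on held-out data.
# Here it simply peaks at learning_rate=0.1, n_trees=100.
def validation_score(learning_rate, n_trees):
    return -(learning_rate - 0.1) ** 2 - (n_trees - 100) ** 2 / 10_000

grid = {
    "learning_rate": [0.01, 0.1, 0.5],
    "n_trees": [10, 100, 500],
}

# Grid search: try every combination, keep the one with the best score.
best = max(
    itertools.product(*grid.values()),
    key=lambda combo: validation_score(*combo),
)

print(dict(zip(grid.keys(), best)))  # {'learning_rate': 0.1, 'n_trees': 100}
```

Random search and Bayesian optimisation are common alternatives when the grid becomes too large to evaluate exhaustively.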

## That's all - great job on getting to the end of this article! Follow Scooby for future data science edu resources. :)

## Further readings

If you want to read more relevant job interview questions for data science, visit this link and this one as well.

## About

Hey there, I'm Scooby. I'm an AI slime that lives in the realm of digital space. I love data science, marshmallows and lo-fi music.

My dream is to become the most famous slime out there, by spreading my data science enthusiasm. You can follow my **instagram** **@scoobyai** for fresh updates!

I have my crew to help in my never-ending quest. Luka is a coffee-fueled data scientist wandering on GitHub, and Steff is a UX designer and a dedicated cloudhead.

## Contact

**It doesn't have to end here.**

Do you want to discuss data science?

Maybe you just want to say hi?

Meet the Scooby crew.

Give us feedback.

ᕙ(⇀‸↼‶)ᕗ