Basic Math for ML

Anyone who wants to work in the ML and AI field should build a solid math foundation, starting with linear algebra. The core objects are matrices and vectors: matrices are rectangular grids of numbers, vectors are ordered lists of numbers, and data is commonly stored in these forms. You should be comfortable with operations such as addition, matrix multiplication and dot products.
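To make this concrete, here is a minimal NumPy sketch of those operations (the array values are purely illustrative):

```python
import numpy as np

# A vector is a 1-D array of numbers; a matrix is a 2-D grid.
v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

print(v + w)   # element-wise addition -> [5. 7. 9.]
print(v @ w)   # dot product -> 1*4 + 2*5 + 3*6 = 32.0
print(A + B)   # element-wise matrix addition
print(A @ B)   # matrix multiplication (2x2 result)
```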

Next, learn determinants and inverses. The determinant tells you whether a matrix can be inverted, which matters in optimization problems and in solving systems of linear equations.
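Here is a small sketch of these ideas with NumPy (the matrix and right-hand side are arbitrary examples):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

det = np.linalg.det(A)      # 2*3 - 1*1 = 5.0; nonzero, so A is invertible
A_inv = np.linalg.inv(A)    # explicit inverse (rarely needed in practice)
x = np.linalg.solve(A, b)   # solves A @ x = b more stably than multiplying by A_inv
print(det, x)               # x = [0.8, 1.4]
```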

Another key area is eigenvalues and eigenvectors, which help us understand the variance in data. They are the foundation of Principal Component Analysis (PCA), which reduces the dimensionality of data sets.
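As a sketch of how this connects to PCA, the snippet below eigendecomposes the covariance matrix of some synthetic correlated data and projects it onto the top principal component (the data-generating numbers are just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 correlated 2-D points: the second feature roughly follows the first
X = rng.normal(size=(200, 2))
X[:, 1] = 0.8 * X[:, 0] + 0.2 * X[:, 1]

X_centered = X - X.mean(axis=0)
cov = np.cov(X_centered, rowvar=False)    # 2x2 covariance matrix

eigvals, eigvecs = np.linalg.eigh(cov)    # eigendecomposition of a symmetric matrix
order = np.argsort(eigvals)[::-1]         # sort by explained variance, largest first
components = eigvecs[:, order]

# Project onto the top principal component: 2-D data reduced to 1-D
X_reduced = X_centered @ components[:, :1]
print(eigvals[order], X_reduced.shape)    # variance along each axis, (200, 1)
```

Equivalently, applying np.linalg.svd to the centered data yields the same principal directions, which is one reason SVD (next) shows up in dimensionality reduction.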

Lastly, learn matrix decompositions such as Singular Value Decomposition (SVD), which is used in dimensionality reduction and data compression.

Let us turn to basic calculus, which is core to understanding how models learn from data. First, learn derivatives and gradients. A derivative measures how a quantity changes; a gradient is its multidimensional counterpart. These power optimization algorithms such as gradient descent, which models use to adjust their parameters and minimize error.
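As a minimal sketch of gradient descent, here is a one-parameter least-squares fit; the data, learning rate and step count are arbitrary choices for illustration:

```python
import numpy as np

# Toy data generated from y = 3x plus noise; the model is y_hat = w * x
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + 0.1 * rng.normal(size=100)

w = 0.0       # initial parameter
lr = 0.1      # learning rate (step size)
for step in range(100):
    error = w * x - y                 # residuals
    grad = 2.0 * np.mean(error * x)   # derivative of mean squared error w.r.t. w
    w -= lr * grad                    # step against the gradient
print(w)                              # should end up close to 3.0
```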

The chain rule is central to neural networks because it is what makes backpropagation work: it lets us compute how much each weight in the network contributes to the overall error, so the model can learn effectively.
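Here is a tiny sketch of the chain rule on a single sigmoid unit, with the hand-derived gradient checked against a finite difference (the input, target and weight values are made up):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One input, one weight, one sigmoid unit, squared-error loss:
# loss(w) = (sigmoid(w * x) - t) ** 2
x, t, w = 1.5, 0.2, 0.7

# Forward pass
z = w * x
a = sigmoid(z)
loss = (a - t) ** 2

# Backward pass: chain rule, dloss/dw = dloss/da * da/dz * dz/dw
dloss_da = 2.0 * (a - t)
da_dz = a * (1.0 - a)
dz_dw = x
dloss_dw = dloss_da * da_dz * dz_dw

# Numerical check with a small finite difference
eps = 1e-6
loss_plus = (sigmoid((w + eps) * x) - t) ** 2
print(dloss_dw, (loss_plus - loss) / eps)   # the two values should nearly match
```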

Lastly, learn optimization basics: concepts such as local and global minima, saddle points and convexity.
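A quick sketch of why saddle points matter, using the classic surface f(x, y) = x² − y²: the gradient vanishes there even though it is neither a minimum nor a maximum.

```python
import numpy as np

def f(p):
    x, y = p
    return x**2 - y**2          # classic saddle-shaped surface

def grad(p):
    x, y = p
    return np.array([2 * x, -2 * y])

origin = np.array([0.0, 0.0])
print(grad(origin))                         # the gradient is zero at the origin...
print(f(origin), f([0.1, 0]), f([0, 0.1]))  # ...yet moving along x increases f
                                            # and moving along y decreases it: a saddle point
```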

To be competent in this field, you should learn statistics and probability.

Data is understood through statistics and probability. You should know the common distributions (normal, binomial and uniform), as well as variance and covariance, which describe the spread of the data and how variables move together. Bayes' theorem is a core tool for probabilistic reasoning and the foundation of algorithms such as Naive Bayes and Bayesian optimization. You should also understand Maximum Likelihood Estimation (MLE), which estimates model parameters by finding the values that make the observed data most likely; it underlies logistic regression.
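The sketch below touches each of these ideas; the prior, test accuracies and coin bias are invented numbers for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Variance and covariance of two related normal variables
x = rng.normal(loc=0.0, scale=1.0, size=1000)
y = 0.5 * x + rng.normal(scale=0.5, size=1000)
print(np.var(x), np.cov(x, y))   # spread of x, and the 2x2 covariance matrix

# Bayes' theorem: P(disease | positive test) from a prior and test accuracy
prior = 0.01                     # P(disease)
sensitivity = 0.95               # P(positive | disease)
false_positive = 0.05            # P(positive | no disease)
evidence = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / evidence
print(posterior)                 # about 0.16, despite the "95% accurate" test

# MLE for a coin's bias: the likelihood-maximizing estimate is the sample mean
flips = rng.binomial(n=1, p=0.7, size=500)
print(flips.mean())              # close to the true parameter 0.7
```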

Finally, you should know sampling and probability.
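A one-line Monte Carlo sketch of sampling in action, estimating a tail probability of the standard normal by simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo: estimate P(Z > 2) for a standard normal by sampling
samples = rng.normal(size=1_000_000)
print((samples > 2).mean())   # roughly 0.0228, close to the exact tail probability
```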

Good resources for linear algebra and calculus are 3Blue1Brown's Essence of Linear Algebra and Essence of Calculus series. To get a grasp of statistics and probability, you can follow the StatQuest videos. There is also the Mathematics for Machine Learning specialization from Imperial College London on Coursera, and DeepLearning.AI has released a Math for ML specialization on Coursera as well.

The manga math books (the Manga Guides to Calculus, Algebra and Statistics) are also a good help.

The Mathematics for Machine Learning e-book by Deisenroth and colleagues is available for free.
