Karan Shah
5 min read · Jan 16, 2021

Realizability and Sample Complexity in Machine Learning


Machine Learning from First Principles: Blog Post 4

In the last blog we looked at the problem of overfitting, which arises when a hypothesis learns the training samples exactly as they are and fails to generalize to unseen samples. We also saw that one way to address this issue is to restrict the learner in advance to a hypothesis class H, chosen so that a good hypothesis resides in it. This restriction is called the Inductive Bias in ML.
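As a minimal sketch of this idea, the example below restricts the learner to a small, fixed class of threshold classifiers and picks the one with the lowest training error (Empirical Risk Minimization). The hypothesis class, the threshold grid, and the toy labeling rule are all illustrative choices, not anything prescribed by the post:

```python
import random

# Hypothesis class H: threshold classifiers h_t(x) = 1 if x >= t else 0,
# for thresholds t drawn from a small, fixed grid. Committing to this
# restricted class in advance (instead of memorizing the sample) is the
# inductive bias.
THRESHOLDS = [i / 10 for i in range(11)]  # t in {0.0, 0.1, ..., 1.0}

def h(t, x):
    return 1 if x >= t else 0

def empirical_risk(t, sample):
    """Fraction of training points misclassified by threshold t."""
    return sum(h(t, x) != y for x, y in sample) / len(sample)

def erm(sample):
    """Empirical Risk Minimization: return the hypothesis in H with the
    lowest error on the training sample."""
    return min(THRESHOLDS, key=lambda t: empirical_risk(t, sample))

# Toy data: the true labeling rule x >= 0.5 lies inside H, so a
# zero-training-error hypothesis exists in the class.
random.seed(0)
sample = [(x, 1 if x >= 0.5 else 0) for x in (random.random() for _ in range(50))]

best_t = erm(sample)
print(best_t, empirical_risk(best_t, sample))
```

Because the true rule is itself a member of H here, ERM can drive the training error to zero; when the class is misspecified, the best hypothesis in H will generally have nonzero error, which connects to the realizability assumption discussed next.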