Underfitting occurs in machine learning / data science when a model fails to accurately capture the relationship between the input and output variables, resulting in high error rates on both the training set and unseen data. It typically happens when the model is trained for too short a time or when the input features are not informative enough to establish a meaningful relationship with the output. As a model learns, its bias diminishes, but its variance may increase, eventually leading to overfitting. The objective in model fitting is to find the optimal balance between underfitting and overfitting (i.e., the sweet spot), so that the model captures the dominant trend in the training data and generalizes effectively to new datasets.
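To make this concrete, here is a minimal sketch, assuming scikit-learn and NumPy are available; the cubic toy dataset is an illustrative assumption, not from the original text. A straight line is fit to data with a cubic trend, so the model cannot capture the true relationship and the error stays high on both the training and test sets:

```python
# Minimal underfitting sketch (illustrative toy data, not from the source).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = X[:, 0] ** 3 + rng.normal(scale=1.0, size=300)  # cubic trend + noise

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LinearRegression().fit(X_train, y_train)  # too simple for cubic data
print("train MSE:", mean_squared_error(y_train, model.predict(X_train)))
print("test MSE:", mean_squared_error(y_test, model.predict(X_test)))
# Both errors come out large and close to each other: the signature of
# high bias (underfitting) rather than overfitting.
```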
Important details:
A high-bias (underfitted) model is unable to learn even the basic, important patterns in the training data.
Adding more data or making the model simpler will not help avoid underfitting.
Instead, one should try a more sophisticated model (e.g., a decision tree in comparison to kNN) or add complexity to the current one.
Using a more complex model (e.g., polynomial regression rather than linear regression) can help capture the relevant patterns in the training data; see the sketch after this list.
Adding more features (or features derived from existing ones) also increases model capacity and helps avoid underfitting.
If both the training error and the test error are unacceptably high, the model is underfitted.
High bias and low variance are the indicators of an underfitted model.
Underfitting is easier to detect than overfitting, since the model's performance can already be measured during the training phase.
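As a follow-up to the points on model complexity and added features above, here is a minimal sketch that continues from the toy data in the first example (X_train, y_train, X_test, y_test are reused from there). Expanding the input with polynomial (derived) features raises model capacity, so a linear learner can capture the cubic trend; the degree-3 choice is an assumption matched to the toy data:

```python
# Continuation of the sketch above: add capacity via derived features.
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# X_train/y_train/X_test/y_test come from the earlier sketch.
poly_model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
poly_model.fit(X_train, y_train)
print("train MSE:", mean_squared_error(y_train, poly_model.predict(X_train)))
print("test MSE:", mean_squared_error(y_test, poly_model.predict(X_test)))
# Both errors drop sharply compared with the plain linear fit; pushing the
# degree much higher would eventually raise test error again (overfitting).
```

The same idea applies beyond this toy case: whether by switching to a more expressive model or by engineering extra features, the goal is to give the model just enough capacity to capture the dominant trend without memorizing noise.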