Decision Boundaries Reveal Model Fit Issues

Decision boundaries separate classes in classification: lines in 2D, surfaces in 3D, hyperplanes in higher dimensions. Linear models (logistic regression, linear SVM) draw straight boundaries, which are highly interpretable but fail on nonlinear data such as circles or spirals; the result is underfitting, with high bias and poor performance on both training and test sets. Nonlinear models (decision trees, random forests, kernel SVMs, neural networks) create curved, flexible boundaries that capture complex patterns, but they risk overfitting by fitting noise, yielding high training accuracy yet poor test results due to high variance.

Underfitting happens when a simple linear boundary misses curved structure in the data, for example two classes of points that only a curve can separate. Overfitting shows up as a 'snake-like' boundary hugging every training point, memorizing quirks of the sample instead of the underlying pattern.
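The contrast above can be sketched with scikit-learn's synthetic concentric-circles dataset, where no straight line separates the classes: a linear model scores near chance while an RBF-kernel SVM, whose boundary can curve, separates them cleanly. The dataset parameters and model choices here are illustrative, not prescriptive.

```python
# Sketch: linear vs. nonlinear decision boundaries on concentric-circle data.
from sklearn.datasets import make_circles
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two classes arranged as concentric circles: not linearly separable.
X, y = make_circles(n_samples=500, noise=0.1, factor=0.4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear = LogisticRegression().fit(X_tr, y_tr)   # straight-line boundary
nonlinear = SVC(kernel="rbf").fit(X_tr, y_tr)   # curved (kernel) boundary

print(f"linear test accuracy:    {linear.score(X_te, y_te):.2f}")
print(f"nonlinear test accuracy: {nonlinear.score(X_te, y_te):.2f}")
```

On this data the linear model underfits (accuracy near 0.5) while the kernel SVM's curved boundary fits the ring structure, illustrating why boundary shape must match data structure.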

Bias-Variance Tradeoff Guides Optimal Complexity

Model performance follows a U-shaped curve: simple models suffer high bias (underfitting), complex ones high variance (overfitting). Learning curves help diagnose which regime you are in: underfitting shows high, flat training and validation errors; overfitting shows low training error diverging from a much higher validation error.
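One way to see the diagnosis in practice is to sweep model complexity and compare training and validation error, here with polynomial regression of increasing degree on noisy quadratic data (the degrees, sample size, and noise level are arbitrary choices for illustration):

```python
# Sketch: training vs. validation error as model complexity grows.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = X[:, 0] ** 2 + rng.normal(0, 1.0, size=200)   # quadratic signal + noise
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

train_mse, val_mse = {}, {}
for degree in (1, 2, 15):   # too simple, about right, too complex
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_tr, y_tr)
    train_mse[degree] = mean_squared_error(y_tr, model.predict(X_tr))
    val_mse[degree] = mean_squared_error(y_val, model.predict(X_val))
    print(f"degree {degree:2d}: train MSE {train_mse[degree]:.2f}, "
          f"val MSE {val_mse[degree]:.2f}")
```

Degree 1 shows the underfitting signature (both errors high); higher degrees keep driving training error down while validation error stops improving, the divergence that signals overfitting.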

Linear models generalize reliably but underperform on real-world nonlinearity; nonlinear models can represent flexible interactions but need constraints. The goal is the optimal complexity that captures structure without fitting noise.

Practical Fixes and Real-World Application

Fix underfitting by switching to a more complex model, adding features, reducing regularization, or training longer. Combat overfitting with simpler models, L1/L2 regularization, dropout, more data, data augmentation, early stopping, or cross-validation.
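As a minimal sketch of one of these fixes, L2 regularization (ridge regression) can tame an over-flexible model without changing its functional form. Here a degree-15 polynomial is fit to a small noisy sample both without and with an L2 penalty; the sample size, degree, and `alpha` are hand-picked for illustration (in practice `alpha` would be tuned by cross-validation):

```python
# Sketch: L2 regularization reducing an overfit polynomial model's test error.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(40, 1))              # small sample: easy to overfit
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, size=40)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

def degree15(regressor):
    """Same degree-15 feature expansion; only the fitting step differs."""
    return make_pipeline(PolynomialFeatures(15), StandardScaler(), regressor)

plain = degree15(LinearRegression()).fit(X_tr, y_tr)   # unconstrained weights
ridge = degree15(Ridge(alpha=1.0)).fit(X_tr, y_tr)     # L2 penalty on weights

plain_mse = mean_squared_error(y_te, plain.predict(X_te))
ridge_mse = mean_squared_error(y_te, ridge.predict(X_te))
print(f"unregularized test MSE: {plain_mse:.2f}")
print(f"ridge test MSE:         {ridge_mse:.2f}")
```

The penalty shrinks the high-degree coefficients, trading a little training fit for a smoother boundary and lower test error, which is exactly the bias-variance tradeoff in miniature.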

In medical imaging (ultrasound, radiology), small datasets cause models to overfit to patient-specific noise rather than disease features; augmentation, regularization, and co-teaching help. The key principle: prioritize consistent performance on unseen data over perfection on the training set.
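Augmentation in this setting can be as simple as adding label-preserving transforms of each image. The NumPy sketch below is a hypothetical minimal version using horizontal flips only; real imaging pipelines use richer transforms (rotations, crops, intensity shifts):

```python
# Sketch: doubling a tiny image dataset with horizontally flipped copies.
import numpy as np

def augment(images):
    """Return the original images plus horizontally flipped copies.

    images: array of shape (n_images, height, width).
    """
    flipped = images[:, :, ::-1]               # flip along the width axis
    return np.concatenate([images, flipped], axis=0)

batch = np.arange(2 * 4 * 4).reshape(2, 4, 4).astype(float)  # 2 toy "images"
aug = augment(batch)
print(aug.shape)   # dataset doubled: (4, 4, 4)
```

Because the model now sees each anatomy in more than one orientation, it has less opportunity to memorize patient-specific pixel quirks, which is the overfitting mode the paragraph above describes.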