Bias
ELI5 — The Vibe Check
In ML, bias means the model has systematic errors — it's consistently wrong in the same direction. A model with high bias is underfit: it makes the same kinds of mistakes no matter what data you show it. The word 'bias' also gets used to mean unfair discrimination in AI outputs (like a hiring tool that favors men) — same word, very different problem.
Real Talk
In the bias-variance tradeoff, bias refers to the error introduced by approximating a real-world problem with a too-simple model. High bias means the model systematically underpredicts or overpredicts regardless of which training set it saw. The "tradeoff" part: a model's expected error decomposes into bias squared, variance, and irreducible noise, so making a model more flexible lowers bias but usually raises variance. In AI ethics, 'bias' refers to systematic unfair treatment of groups, often inherited from biased training data.
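A minimal sketch of the statistical meaning, using a made-up quadratic ground truth and NumPy's polynomial fitting: a straight line can't bend to follow x², so its error stays large even when you average over many fresh noisy training sets — that surviving, systematic error is bias.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the true relationship is quadratic, y = x^2 + noise.
x = np.linspace(-3, 3, 100)
y_true = x ** 2

def avg_error(degree, trials=200, noise=1.0):
    """Average squared error vs. the TRUE function, over many resamples.

    Averaging over fresh training sets washes out variance; whatever
    error is left is systematic — that's the bias component.
    """
    errors = []
    for _ in range(trials):
        y = y_true + rng.normal(0, noise, size=x.shape)   # fresh noisy sample
        coeffs = np.polyfit(x, y, degree)                 # fit polynomial
        pred = np.polyval(coeffs, x)
        errors.append(np.mean((pred - y_true) ** 2))
    return float(np.mean(errors))

# Degree 1 (a line) is high bias: it underpredicts in the middle and
# overpredicts at the edges on EVERY resample. Degree 2 can express x^2.
print(avg_error(1))  # large, no matter how much data you average over
print(avg_error(2))  # near zero
```

The point of averaging over `trials` resamples is that variance cancels out but bias doesn't: the degree-1 line is wrong in the same direction every time.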
When You'll Hear This
"There's a gender bias in the model's hiring recommendations." / "High bias is the underfitting problem; high variance is the overfitting one."
Related Terms
Overfitting
Overfitting is when your model gets TOO good at the training data and becomes useless on new data.
Training
Training is the long, expensive process where an AI learns from data.
Underfitting
Underfitting is the opposite of overfitting — the model hasn't learned enough and does badly on BOTH the training data AND new data.
Variance
Variance in ML means your model is too sensitive to the specific training data it saw.