Float
ELI5 — The Vibe Check
A float is a number with a decimal point — 3.14, 1.5, -0.001. The name comes from 'floating point' because the decimal point can be anywhere. The tricky thing: computers cannot store ALL decimals exactly, so 0.1 + 0.2 does not always equal exactly 0.3. This surprises every beginner programmer.
Real Talk
A floating-point number (float) is a numeric type that represents real numbers with fractional components using the IEEE 754 standard. Single precision (float32) uses 32 bits; double precision (float64) uses 64 bits. Floating-point arithmetic has inherent precision limitations — not all decimals can be represented exactly in binary, causing rounding errors. Never use floats for money — use integers (cents) instead.
Show Me The Code
// The classic floating point surprise:
console.log(0.1 + 0.2); // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false!
// Why? Binary can't represent 0.1 exactly
// Fix with epsilon comparison:
const isEqual = Math.abs(0.1 + 0.2 - 0.3) < Number.EPSILON;
// Money: NEVER use floats
// BAD:
const price = 19.99; // 19.99 has no exact binary representation — rounding creeps in
// GOOD:
const priceInCents = 1999; // integers are exact
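To make the cents approach concrete, here's a minimal sketch of doing all price math in whole cents and only formatting as a decimal string at display time (the item list and `formatCents` helper are illustrative, not a standard API):

```javascript
// All arithmetic happens in whole cents — exact integer math, no rounding.
const itemsInCents = [1999, 499, 250]; // $19.99, $4.99, $2.50
const totalCents = itemsInCents.reduce((sum, c) => sum + c, 0);

// Convert to a display string only at the very end.
function formatCents(cents) {
  const dollars = Math.trunc(cents / 100);
  const rest = Math.abs(cents % 100).toString().padStart(2, "0");
  return `$${dollars}.${rest}`;
}

console.log(formatCents(totalCents)); // "$27.48"
```

The key design point: the float (or rather, the decimal string) only ever exists at the presentation layer; everything the business logic touches is an integer.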
When You'll Hear This
"Use integers for money, not floats — floating point errors will bite you." / "The 0.1 + 0.2 bug is a classic float precision issue."
Related Terms
Integer
An integer is a whole number — no decimal point. 1, 42, -7, 1000 are integers. 1.5 is NOT an integer, that is a float.
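A quick way to check in JavaScript, where integers and floats share the single `number` type:

```javascript
console.log(Number.isInteger(42));  // true
console.log(Number.isInteger(-7));  // true
console.log(Number.isInteger(1.5)); // false
// Careful: 1.0 has no fractional part, so JS treats it as an integer value
console.log(Number.isInteger(1.0)); // true
```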
NaN (Not a Number)
NaN means 'Not a Number' — it is what JavaScript gives you when math goes wrong in a weird way. Try to parse a word as a number and you get NaN.
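You can see NaN's weirdness in a few lines — including the fact that it's the only value in JavaScript not equal to itself:

```javascript
console.log(Number("hello")); // NaN — can't parse a word as a number
console.log(typeof NaN);      // "number" (ironically)
console.log(NaN === NaN);     // false — NaN is never equal to anything, even itself
// So use Number.isNaN to check for it:
console.log(Number.isNaN(Number("hello"))); // true
```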
Precision
In machine learning, precision asks: 'Of all the times the AI said YES, how often was it actually right?' For floats, precision means how many significant digits a number can hold: float64 gives you roughly 15 to 17 decimal digits before rounding kicks in.
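For floats specifically, precision runs out at large magnitudes — a float64 can only represent every integer exactly up to 2^53:

```javascript
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991 (2^53 - 1)
// Above that, neighboring integers round to the same float:
console.log(9007199254740992 === 9007199254740993); // true!
```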
Type
A type tells the computer what kind of thing a value is — is it a number, text, true/false, or a list?
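In JavaScript you can inspect a value's type with `typeof` — and notably, integers and floats come back as the same type:

```javascript
console.log(typeof 3.14); // "number" — one number type covers ints and floats
console.log(typeof 42);   // "number"
console.log(typeof "hi"); // "string"
console.log(typeof true); // "boolean"
// Lists are objects, so use Array.isArray for those:
console.log(Array.isArray([1, 2, 3])); // true
```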