
Float

Easy — everyone uses this · General Dev

ELI5 — The Vibe Check

A float is a number with a decimal point — 3.14, 1.5, -0.001. The name comes from 'floating point' because the decimal point can be anywhere. The tricky thing: computers cannot store ALL decimals exactly, so 0.1 + 0.2 comes out to 0.30000000000000004, not 0.3. This surprises every beginner programmer.

Real Talk

A floating-point number (float) is a numeric type that approximates real numbers with fractional components, almost always per the IEEE 754 standard. Single precision (float32) uses 32 bits; double precision (float64) uses 64 bits. Floating-point arithmetic has inherent precision limits: most decimal fractions have no exact binary representation, so results carry small rounding errors. Never use floats for money — use integers (cents) instead.
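The gap between the two widths is easy to see in JavaScript, where every number is a float64 but Math.fround rounds a value to the nearest float32. A minimal sketch:

```javascript
// All JS numbers are 64-bit doubles; Math.fround simulates 32-bit storage.
const asFloat32 = Math.fround(0.1);

console.log(asFloat32 === 0.1); // false: 0.1 loses bits at single precision
console.log(asFloat32);         // 0.10000000149011612

// Values that fit exactly in binary (powers of two) survive both widths:
console.log(Math.fround(0.5) === 0.5); // true
```

Note that 0.1 is inexact at both widths — float64 just carries the error further out, which is why the double-precision sum below still misses 0.3.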

Show Me The Code

// The classic floating point surprise:
console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false!

// Why? Binary can't represent 0.1 exactly
// Fix with epsilon comparison:
const isEqual = Math.abs(0.1 + 0.2 - 0.3) < Number.EPSILON;

// Money: NEVER use floats
// BAD:
const price = 19.99; // 19.99 has no exact binary form; sums drift
// GOOD:
const priceInCents = 1999; // integers are exact

When You'll Hear This

"Use integers for money, not floats — floating point errors will bite you." / "The 0.1 + 0.2 bug is a classic float precision issue."

Made with passive-aggressive love by manoga.digital. Powered by Claude.