Few-Shot Learning
ELI5 — The Vibe Check
Few-shot learning is teaching an AI by showing it just a few examples — like showing someone two pictures of a platypus and saying 'find more of these.' Instead of millions of training examples, you give the model 2-5 examples right in the prompt and it figures out the pattern. It's surprisingly effective and makes you wonder why we ever thought AI needed millions of examples.
Real Talk
Few-shot learning refers to the ability of large language models to learn new tasks from just a handful of examples provided in the prompt, with no gradient updates or fine-tuning. The model uses in-context learning to infer the pattern from the demonstrations and apply it to new inputs. This emerged as a key capability of GPT-3 and has since become a standard prompting technique across modern LLMs.
Show Me The Code
# Few-shot classification
prompt = """
Classify the sentiment:
"Great product!" → positive
"Terrible service" → negative
"It's okay" → neutral
"Best purchase ever!" →
"""
# Model outputs: positive
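The same prompt can be assembled programmatically, which makes it easy to swap in different demonstrations or queries. A minimal sketch — the `build_few_shot_prompt` helper below is a hypothetical utility, not part of any library:

```python
def build_few_shot_prompt(examples, query, instruction="Classify the sentiment:"):
    """Format labeled demonstrations plus a new query into a few-shot prompt.

    `examples` is a list of (text, label) pairs; the final line is left
    unlabeled so the model completes it.
    """
    lines = [instruction]
    for text, label in examples:
        lines.append(f'"{text}" → {label}')
    lines.append(f'"{query}" →')  # the model fills in this label
    return "\n".join(lines)

# Illustrative demonstrations (same ones as the prompt above)
demos = [
    ("Great product!", "positive"),
    ("Terrible service", "negative"),
    ("It's okay", "neutral"),
]
print(build_few_shot_prompt(demos, "Best purchase ever!"))
```

The resulting string can then be sent to any LLM completion endpoint; varying the number of `demos` entries is how you go from one-shot to few-shot.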
When You'll Hear This
"Three-shot prompting got us 90% accuracy without any fine-tuning." / "Just give it a few examples — few-shot learning handles the rest."
Related Terms
In-Context Learning
In-context learning is the AI's ability to learn new tricks just from what you put in the prompt — without changing any of its actual brain weights.
LLM (Large Language Model)
An LLM is a humongous AI that read basically the entire internet and learned to predict what words come next, really really well.
Prompt Engineering
Prompt engineering is the art of talking to AI so it actually does what you want.
Transfer Learning
Transfer Learning is using knowledge a model already has from one task to help it with a different task.