/fyne-TOO-ning/
Training an existing AI model on your specific data to specialize it for a particular task, style, or domain — like teaching a general doctor to become a cardiologist.
Fine-tuning takes a pre-trained AI model and trains it further on your specific data. The base model already understands language; fine-tuning teaches it your domain, your tone, your formats. It's like hiring a brilliant generalist and giving them six months of on-the-job training.
Fine-tuning is powerful but often misused. Most problems people reach for fine-tuning to solve can be handled with better prompting or RAG. Fine-tuning makes sense when you need a consistent output style, have thousands of high-quality examples, or need the model to learn patterns that can't be expressed in a prompt.
The cost equation: prompting is free to iterate, RAG is cheap to update, fine-tuning is expensive and slow. Try them in that order.
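If you do reach the fine-tuning stage, the first practical step is preparing training data. A minimal sketch, assuming the JSONL chat format used by common fine-tuning APIs (e.g. OpenAI's); field names and file requirements may differ for other providers:

```python
import json

# Each training example is one JSON object per line, containing a full
# conversation: the system prompt, a user message, and the assistant
# reply you want the model to learn to produce.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a support agent for Acme Co."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Go to Settings > Security > Reset Password."},
        ]
    },
    # In practice you'd want hundreds to thousands of high-quality examples.
]

# Write one JSON object per line (JSONL), the format most fine-tuning
# endpoints expect for upload.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

Note that "Acme Co." and the example conversation are placeholders; the quality and consistency of these examples matter far more than the training run itself.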
When prompting and RAG aren't enough — typically for consistent style/tone, specialized formats, or when you have abundant training examples.
Knowing when NOT to fine-tune saves more money than knowing how to do it. It's the most over-recommended and under-needed technique in AI.
Fine-tuning a radio — the station (base model) already exists, you're just adjusting the dial for perfect reception.
A Mac app that coaches your AI vocabulary daily