
Fine-tuning

In one line: Training an existing model on additional examples to specialise it for a domain — like making ChatGPT write in your company's voice.

Fine-tuning takes a pre-trained foundation model and trains it further on your own dataset. This specialises the model for a domain — making it write in your tone, follow your formatting conventions, or know your product.

Typical costs: $5–$500 to train, with inference priced similarly to the base model. Common providers: OpenAI (GPT model fine-tuning), Anthropic (Claude fine-tuning is limited), and Together or Replicate (lower-cost fine-tuning of open-weights models).
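To make this concrete, here is a minimal sketch of the first step in an OpenAI-style fine-tuning workflow: packaging your examples as chat-format JSONL, where each line pairs a user message with the ideal assistant reply. The support questions and answers below are hypothetical stand-ins for your own data.

```python
import json

def build_chat_example(user_msg: str, ideal_reply: str,
                       system_msg: str = "You are a helpful support assistant.") -> dict:
    """One training example in the chat-format JSONL that fine-tuning APIs expect."""
    return {
        "messages": [
            {"role": "system", "content": system_msg},
            {"role": "user", "content": user_msg},
            {"role": "assistant", "content": ideal_reply},
        ]
    }

# Hypothetical examples that teach the model your company's voice:
# same questions users already ask, answered the way you want them answered.
pairs = [
    ("How do I reset my password?",
     "Head to Settings > Security and tap 'Reset password'. Takes about 30 seconds."),
    ("Do you offer refunds?",
     "Yes! Full refunds within 30 days, no questions asked. Start one from your order page."),
]

with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for user_msg, reply in pairs:
        f.write(json.dumps(build_chat_example(user_msg, reply)) + "\n")
```

You would then upload `training_data.jsonl` to your provider and start a fine-tuning job; in practice, a few hundred consistent examples usually matter more than a huge inconsistent dataset.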

For most use cases, prompt engineering + RAG is faster and cheaper than fine-tuning. Reach for fine-tuning when prompt engineering hits a ceiling — for example, when the model won't reliably match your tone or output format no matter how the prompt is worded.

See it in action — ask any AI about fine-tuning on AskAI.free.