In fine-tuning, we start with a pre-trained model, say an LLM or an image classifier, and train it further on a specific task or dataset. The model adapts to the new task by adjusting its parameters while leveraging the knowledge it gained during pre-training. This is an instance of transfer learning: an ML technique in which a model trained on one task is repurposed or adapted for a different but related task, rather than being trained from scratch. Transfer learning allows the model to apply knowledge learned on one task to another task, typically with less data and computation.
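The core mechanic can be sketched in a few lines: instead of initializing parameters randomly, continue gradient descent from parameters learned elsewhere. The snippet below is a minimal toy illustration, not a real pretrained network; the "pretrained" weights, the linear model, and the synthetic dataset are all assumptions made for the sake of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "pretrained" weights: in practice these would come from
# training on a large source task; here they are placeholders.
W_pretrained = rng.normal(size=(4, 1))

# Small synthetic target-task dataset: 20 examples, 4 features.
X = rng.normal(size=(20, 4))
true_w = np.array([[1.0], [-2.0], [0.5], [0.0]])
y = X @ true_w + 0.1 * rng.normal(size=(20, 1))

def mse(W):
    return float(np.mean((X @ W - y) ** 2))

# Fine-tuning = resume gradient descent from the pretrained weights
# rather than from a fresh random initialization.
W = W_pretrained.copy()
lr = 0.05
loss_before = mse(W)
for _ in range(200):
    grad = 2 * X.T @ (X @ W - y) / len(X)  # gradient of the MSE loss
    W -= lr * grad
loss_after = mse(W)
```

After the loop, `loss_after` is lower than `loss_before`: the parameters have adjusted to the new task while starting from, rather than discarding, the earlier solution.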
To illustrate, consider an image classification model trained on a large dataset with millions of labelled images, which we want to adapt to distinguish between various species of flowers, but with only a small dataset of labelled flower images. With transfer learning, the pre-trained model is fine-tuned on the small flower dataset. The model transfers the knowledge it gained previously of recognizing general features in images, such as edges, textures, and shapes. Because it learned these features from the large dataset, it can quickly adapt to the specific task of classifying flowers even though the data is limited.
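A common way to do this in practice is to freeze the pre-trained feature extractor and train only a small new classification head on the limited data. The sketch below mimics that pattern with toy pieces: a fixed random projection stands in for the pretrained backbone, and the "flower" dataset is synthetic; both are assumptions for illustration, not a real vision model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "pretrained backbone": a frozen projection playing the role
# of general-purpose features (edges, textures, shapes). In practice
# this would be a network pretrained on millions of images.
W_backbone = rng.normal(size=(32, 8))

def extract_features(images):
    """Frozen feature extractor: never updated during transfer."""
    return np.tanh(images @ W_backbone)

# Tiny labelled "flower" dataset: 40 images of 2 hypothetical species.
n = 40
labels = np.repeat([0, 1], n // 2)
images = rng.normal(size=(n, 32)) + labels[:, None] * 1.5  # class shift

feats = extract_features(images)

# Train only a small logistic-regression head on the frozen features.
w = np.zeros(8)
b = 0.0
lr = 0.5
for _ in range(300):
    p = 1 / (1 + np.exp(-(feats @ w + b)))  # predicted P(species 1)
    w -= lr * (feats.T @ (p - labels) / n)  # logistic-loss gradient
    b -= lr * np.mean(p - labels)

preds = (feats @ w + b > 0).astype(int)
accuracy = float(np.mean(preds == labels))
```

Only the head's few parameters are trained, which is why a small labelled dataset suffices: the hard work of learning general features happened during pre-training.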