Foundation Models

The term "foundation models" gained currency in August 2021, when it was coined by the Stanford Institute for Human-Centered AI (HAI) Center for Research on Foundation Models (CRFM). These models are trained on a broad spectrum of generalized, unlabelled data, and they can perform a wide variety of tasks such as understanding language, generating text and images, and conversing in natural language.

They are called foundation models because the original model serves as a base, or foundation, on which other applications are built.

These models are adaptable across various modalities — text, images, audio and video.

These models laid the groundwork for chatbots, which process user inputs and retrieve relevant information.

Large language models (LLMs) fall into this category of foundation models; the GPT-n series is an example. Being trained on a broad corpus of unlabelled data, these models are adaptable to many tasks, which earns GPT-n the title of foundation model.
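
To make this adaptability concrete, here is a minimal sketch using the Hugging Face `transformers` library. The checkpoint name ("gpt2") and the prompts are illustrative assumptions, not a claim about any particular production model; the point is simply that a single pretrained model can be steered toward different tasks by prompting alone, without task-specific retraining.

```python
# A minimal sketch of task adaptability: one pretrained model, several tasks,
# steered only by the wording of the prompt. The "gpt2" checkpoint is a small
# stand-in chosen for illustration; larger foundation models follow the same
# interface but perform these tasks far better.
from transformers import pipeline

# Load a single text-generation pipeline backed by one set of pretrained weights.
generator = pipeline("text-generation", model="gpt2")

# The same weights handle different tasks depending on how they are prompted.
prompts = {
    "summarization": "Summarize: Foundation models are trained on broad, "
                     "unlabelled data and adapted to many tasks.\nSummary:",
    "translation": "Translate to French: Hello, world.\nFrench:",
    "question answering": "Q: What is a foundation model?\nA:",
}

for task, prompt in prompts.items():
    # Greedy decoding, capped at 30 new tokens, so the output is deterministic.
    output = generator(prompt, max_new_tokens=30, do_sample=False)
    print(f"--- {task} ---")
    print(output[0]["generated_text"])
```

The design point is that no task-specific head or fine-tuning step appears anywhere in the sketch: the task is specified entirely in the input text, which is what makes one broadly pretrained model reusable as a foundation for many downstream uses.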
