LLMs are used to build apps such as text generators, question-answering systems, conversational bots and so on, and many new APIs are being created around them. LangChain is a Python library/framework for developing apps powered by LLMs. It connects to LLMs through their APIs, and it can also connect LLMs to data sources, making them aware of their environment.
LLMs on their own can be limited. LangChain connects them to external sources of data and computation, which enhances their ability to provide better answers.
LangChain connects LLMs to our own databases. Apps are created around the models and reference that data, so the LLMs become data-aware and agentic.
The LangChain framework has components called LLM wrappers, which wrap popular language model APIs (for example, those from OpenAI).
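As a minimal sketch of such a wrapper, assuming the classic (pre-1.0) LangChain API and an OpenAI API key available in the environment; the model name, temperature, and prompt below are illustrative:

```python
from langchain.llms import OpenAI

# Assumes the OPENAI_API_KEY environment variable is set.
# Model name and temperature are illustrative choices.
llm = OpenAI(model_name="text-davinci-003", temperature=0.7)

# The wrapper exposes the model as a simple callable.
print(llm("Explain quantum computing in one sentence."))
```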
LangChain is a Python library and is installed with the pip command. Apart from the LangChain package itself, packages such as huggingface_hub are installed to enable working with the Hugging Face APIs.
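A sketch of the setup, assuming the classic LangChain API and a Hugging Face API token in the environment; the repo_id and model parameters are illustrative:

```python
# Install the packages first (run in a shell):
#   pip install langchain huggingface_hub

from langchain.llms import HuggingFaceHub

# Assumes the HUGGINGFACEHUB_API_TOKEN environment variable is set.
# The repo_id and model_kwargs are illustrative choices.
llm = HuggingFaceHub(
    repo_id="google/flan-t5-base",
    model_kwargs={"temperature": 0.5, "max_length": 64},
)
print(llm("What is the capital of France?"))
```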
Earlier, you had to retrain the entire LLM or work with a different model for each task, say one model for translation, one for summarization and so on. These days we use Prompt Templates instead. With these templates, we can make a single LLM do different tasks, such as translation, question answering, text generation and text summarization, on different data.
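A minimal sketch of a prompt template, assuming the classic LangChain API; the template wording and input text are illustrative:

```python
from langchain.prompts import PromptTemplate

# One reusable template for a translation task; swapping the template
# string repurposes the same LLM for a different task.
template = "Translate the following English text to French:\n\n{text}"
prompt = PromptTemplate(input_variables=["text"], template=template)

# format() fills in the placeholder before the prompt is sent to an LLM.
print(prompt.format(text="How are you today?"))
```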
For smaller apps, we can use an LLM in isolation while applying LangChain. For complex apps, it is better to chain LLMs, whether two similar LLMs or two different ones. LangChain provides the standard interface for chaining them.
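A sketch of chaining, assuming the classic LangChain API and an OpenAI key in the environment; the prompts and topic are illustrative:

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

llm = OpenAI(temperature=0.7)  # assumes OPENAI_API_KEY is set

# First chain: write a short synopsis about a topic.
synopsis_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template(
        "Write a one-paragraph synopsis about {topic}."
    ),
)

# Second chain: condense the synopsis into a single sentence.
summary_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template(
        "Summarize this in one sentence:\n\n{synopsis}"
    ),
)

# SimpleSequentialChain feeds the output of one chain into the next.
overall_chain = SimpleSequentialChain(
    chains=[synopsis_chain, summary_chain], verbose=True
)
print(overall_chain.run("the history of the telescope"))
```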
LLMs lack basic functionality such as reliable logic and calculation. Agents fill this gap: they have access to tools and toolkits. A Python agent can use the PythonREPL tool to execute Python commands, with the LLM providing instructions to the agent on what code to run.
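A sketch of such an agent, assuming the classic LangChain agent toolkit API and an OpenAI key in the environment; the question is illustrative:

```python
from langchain.llms import OpenAI
from langchain.agents.agent_toolkits import create_python_agent
from langchain.tools.python.tool import PythonREPLTool

# The agent lets the LLM write Python code and run it in a REPL,
# so calculations are done exactly rather than guessed by the model.
agent_executor = create_python_agent(
    llm=OpenAI(temperature=0),  # assumes OPENAI_API_KEY is set
    tool=PythonREPLTool(),
    verbose=True,
)

agent_executor.run("What is the 10th Fibonacci number?")
```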
The LangChain components, such as Prompts, Chains and Agents, can be combined to build powerful apps.