The first LLM that comes to mind is GPT-4, released in March 2023. It accepts both text and images as input, and it hallucinates far less than GPT-3.5. GPT-4 has been aligned with reinforcement learning from human feedback (RLHF). It is reported to have more than 1 trillion parameters and supports a context length of up to 32,000 tokens; its architecture is rumored to be a mixture of 8 expert models with roughly 220 billion parameters each. Its weakness is speed: inference time is much higher than that of smaller models.
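To make the comparison concrete, here is a minimal sketch of calling GPT-4 through the openai Python package (the pre-1.0 interface). The model name gpt-4-32k assumes your account has access to the 32,000-token variant, and the prompt is only an example.

```python
import os
import openai  # pip install openai (pre-1.0 interface assumed)

openai.api_key = os.environ["OPENAI_API_KEY"]

# Ask GPT-4 a question; the 32k-context variant is selected by model name.
response = openai.ChatCompletion.create(
    model="gpt-4-32k",  # assumption: your account has 32k-context access
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize the trade-offs of large context windows."},
    ],
    max_tokens=200,
)
print(response["choices"][0]["message"]["content"])
```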
GPT-3.5 is another LLM. It is incredibly fast, generating responses within seconds, and its larger variant has a context length of 16,000 tokens. Its weakness is that it hallucinates a lot.
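Because the context window is capped at 16,000 tokens, it is worth counting tokens before sending a request. Below is a small sketch using the tiktoken library; the file name long_document.txt is a hypothetical input.

```python
import tiktoken  # pip install tiktoken

MAX_CONTEXT = 16_000  # approximate window of the 16k GPT-3.5 variant

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
prompt = open("long_document.txt").read()  # hypothetical input file
n_tokens = len(enc.encode(prompt))

if n_tokens > MAX_CONTEXT:
    print(f"Prompt is {n_tokens} tokens; it will not fit in one request.")
else:
    print(f"Prompt is {n_tokens} tokens; OK to send.")
```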
The third LLM is PaLM 2 from Google. Its forte is logic, math, and coding in more than 20 programming languages. Its predecessor, PaLM, has 540 billion parameters; Google has not disclosed the size of PaLM 2. It has a context length of about 4,100 tokens. Google has released four PaLM 2 models in different sizes: Gecko, Otter, Bison, and Unicorn. The model is multilingual: it can understand idioms, riddles, and the nuances of different languages. It is quick to respond.
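Here is a hedged sketch of querying the Bison-sized PaLM 2 text model through Google's google-generativeai package (the PaLM API-era interface); the API key is a placeholder, and the riddle-style prompt is just an illustration of the model's reasoning strengths.

```python
import google.generativeai as palm  # pip install google-generativeai

palm.configure(api_key="YOUR_API_KEY")  # placeholder key

# text-bison-001 is the Bison-sized PaLM 2 text model exposed by the API.
completion = palm.generate_text(
    model="models/text-bison-001",
    prompt="A farmer has 17 sheep. All but 9 run away. How many are left?",
    temperature=0.0,
    max_output_tokens=64,
)
print(completion.result)
```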
You may not be aware, but Anthropic, a company backed by Google, has developed an LLM called Claude v1. Anthropic's goal is to build assistants that are helpful, honest, and harmless. Claude's largest context window is 100,000 tokens, enough to load roughly 75,000 words in a single window. Cohere is another provider, whose smallest model has just 6 billion parameters; it is aimed at enterprise use cases.
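Coming back to Claude's headline feature, the 100,000-token window, here is a minimal sketch using the anthropic Python SDK's completions interface. The model id claude-v1-100k and the file long_report.txt are assumptions for illustration.

```python
import os
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT  # pip install anthropic

client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

with open("long_report.txt") as f:  # hypothetical file, e.g. ~75,000 words
    document = f.read()

# claude-v1-100k is assumed to name the 100k-token-context variant of Claude v1.
completion = client.completions.create(
    model="claude-v1-100k",
    max_tokens_to_sample=300,
    prompt=f"{HUMAN_PROMPT} Summarize this document:\n\n{document}{AI_PROMPT}",
)
print(completion.completion)
```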
Technology Innovation Institute (TII) has introduced Falcon, an open-source LLM. Meta (formerly Facebook) has released the LLaMA models in various sizes, from 7 billion to 65 billion parameters, all open source. Guanaco-65B is a LLaMA-derived model, and Vicuna 33B is another open-source LLM fine-tuned from LLaMA. MPT-30B is an open-source model that competes with LLaMA-derived models and has a context length of 8,000 tokens. 30B-Lazarus, developed by CalderaAI, also uses LLaMA as its foundation model. WizardLM is an open-source LLM built to follow complex instructions. Finally, GPT4All runs local LLMs on your computer without a dedicated GPU or internet connectivity.
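Since GPT4All is the easiest of these open-source options to try, here is a minimal sketch using its Python bindings; the model file name is only an example and may differ between releases.

```python
from gpt4all import GPT4All  # pip install gpt4all

# Downloads the model file on first use and runs it on the CPU;
# the exact model name is an example and may change between releases.
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")

with model.chat_session():
    reply = model.generate("Explain what a context window is.", max_tokens=128)
    print(reply)
```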