Zephyr 7B is also an LLM, developed by Hugging Face, with 7 billion parameters. It is a decoder-only model along the lines of GPT, fine-tuned from Mistral 7B to be more helpful and informative. It was trained on public and synthetic datasets using direct preference optimization (DPO), a technique that aligns a model with human preference data without training a separate reward model.
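For readers curious about what DPO actually optimizes, below is a minimal PyTorch sketch of its loss function. The function name, tensor shapes, and beta value are illustrative assumptions, not Zephyr's actual training code; the idea is that the policy is rewarded for ranking the preferred response above the rejected one, relative to a frozen reference model.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss over a batch of (preferred, rejected) response pairs."""
    # How much more (or less) likely the policy makes each response
    # compared to the frozen reference model.
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # Margin between preferred and rejected responses, scaled by beta;
    # minimizing the loss pushes this margin to be large.
    margin = beta * (chosen_logratio - rejected_logratio)
    return -F.logsigmoid(margin).mean()

# Dummy per-response log-probabilities for a batch of 4 preference pairs.
batch = torch.randn(4), torch.randn(4), torch.randn(4), torch.randn(4)
print(dpo_loss(*batch))
```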
As a result, it outperforms many models of comparable (and even larger) size on chat benchmarks: it generates more fluent, informative text and follows instructions more reliably.
It can be used for a range of natural language processing (NLP) tasks, such as chat, question answering, and summarization.
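To make this concrete, here is a short example of querying Zephyr 7B (beta) through the Hugging Face transformers pipeline. The prompt and sampling settings are illustrative choices, not tuned recommendations:

```python
import torch
from transformers import pipeline

# Load the Zephyr 7B chat model (requires a GPU with enough memory).
pipe = pipeline(
    "text-generation",
    model="HuggingFaceH4/zephyr-7b-beta",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Zephyr is a chat model, so format the input with its chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain direct preference optimization in one sentence."},
]
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

outputs = pipe(prompt, max_new_tokens=128, do_sample=True,
               temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```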
The model is still under active development and is intended for research and educational use only; like other LLMs, it can generate problematic or inaccurate text.