Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. The models were trained on a combination of publicly available online data using custom training libraries and evaluated on several benchmarks, where they achieve state-of-the-art results, especially when fine-tuned for specific tasks. However, because the models can generate inappropriate content, users are responsible for testing and filtering output before deployment.
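As a minimal usage sketch, the snippet below shows one common way to load a Llama 2 checkpoint and generate text with the Hugging Face transformers library. The model id "meta-llama/Llama-2-7b-chat-hf", the use of device_map="auto" (which requires accelerate), and the prompt are assumptions for illustration only; the 13B and 70B variants follow the same pattern, and any filtering of the generated output remains the deployer's responsibility.

```python
# Minimal sketch, assuming access to the gated Llama 2 weights on the
# Hugging Face Hub and an installed transformers + accelerate stack.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # hypothetical choice; 13B/70B work the same way

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain what a large language model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a short completion; testing and filtering the output before
# deployment is left to the user, as noted above.
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```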