
mistralai/Mixtral-8x7B-Instruct-v0.1

Mixtral is a sparse mixture-of-experts (MoE) large language model (LLM) from Mistral AI. This state-of-the-art model combines 8 expert networks of 7B parameters each; during inference, 2 experts are selected per token. This architecture lets a large model stay fast and cheap at inference time. Mixtral-8x7B outperforms Llama 2 70B on most benchmarks.

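To make the "2 of 8 experts per token" idea concrete, here is a minimal sketch of top-2 routing in PyTorch. It is illustrative only, not Mistral's actual implementation: the function name top2_route, the toy dimensions, and the use of plain Linear layers as experts are all assumptions for the example.

```python
import torch
import torch.nn.functional as F

def top2_route(hidden, gate_weight, experts):
    """Toy top-2 MoE routing over a batch of token embeddings.

    hidden:      (tokens, dim) token representations
    gate_weight: (dim, n_experts) router projection
    experts:     list of n_experts feed-forward modules
    """
    logits = hidden @ gate_weight                     # (tokens, n_experts) router scores
    weights, idx = torch.topk(logits, k=2, dim=-1)    # pick 2 of the 8 experts per token
    weights = F.softmax(weights, dim=-1)              # renormalize over the 2 chosen experts
    out = torch.zeros_like(hidden)
    for slot in range(2):                             # first and second choice
        for e, expert in enumerate(experts):
            mask = idx[:, slot] == e                  # tokens routed to expert e in this slot
            if mask.any():
                out[mask] += weights[mask, slot:slot + 1] * expert(hidden[mask])
    return out

# Tiny smoke test with made-up sizes (real Mixtral uses dim=4096, 8 experts).
torch.manual_seed(0)
dim, n_experts = 16, 8
experts = [torch.nn.Linear(dim, dim) for _ in range(n_experts)]
gate = torch.randn(dim, n_experts)
tokens = torch.randn(4, dim)
print(top2_route(tokens, gate, experts).shape)  # torch.Size([4, 16])
```

Only the 2 selected experts run per token, which is why a model with 8x7B expert parameters has roughly the inference cost of a much smaller dense model.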

Visibility: Public
Pricing: $0.24 / Mtoken (per million tokens)
Precision: bfloat16
Context length: 32k
Output: JSON
License: Apache 2.0

Model revision: 3de0408ae8b591d9ac516a2384925dd98ebc66f4

Published: 2023-12-12T04:05:36+00:00
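For local use, a minimal sketch of loading the model with Hugging Face transformers in the bfloat16 precision listed above. This assumes a machine with enough GPU memory (the full bf16 weights are roughly 90 GB, so multiple large GPUs or a quantized variant are typically needed); the prompt text is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the precision on this model card
    device_map="auto",           # spread the weights across available GPUs
)

messages = [
    {"role": "user", "content": "Explain mixture-of-experts in one sentence."},
]
# The instruct model expects Mistral's [INST] chat format; the chat template applies it.
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```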