
mistralai/Mixtral-8x22B-v0.1

Mixtral-8x22B is the latest and largest mixture-of-experts large language model (LLM) from Mistral AI. It is a state-of-the-art model built as a mixture of 8 experts (MoE), each roughly 22B parameters in size. During inference, 2 experts are selected per token, which lets a very large model remain fast and cheap to run. This model is not instruction tuned.
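
To make the routing idea concrete, below is a minimal sketch of top-2 mixture-of-experts routing in PyTorch. The class name, layer sizes, and expert definition are illustrative assumptions, not the real Mixtral-8x22B implementation; the sketch only shows how a router scores each token and dispatches it to 2 of the 8 experts.

```python
# Illustrative top-2 MoE routing sketch (assumed toy sizes, not the 8x22B config).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoE(nn.Module):
    def __init__(self, dim=64, hidden=128, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Router scores each token against every expert.
        self.router = nn.Linear(dim, num_experts, bias=False)
        # Each expert is a small feed-forward network (placeholder for the real experts).
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.SiLU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (tokens, dim)
        logits = self.router(x)                             # (tokens, num_experts)
        weights, chosen = logits.topk(self.top_k, dim=-1)   # keep the 2 best experts per token
        weights = F.softmax(weights, dim=-1)                 # normalize over the chosen 2
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e                  # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(4, 64)
print(Top2MoE()(tokens).shape)  # torch.Size([4, 64])
```

Only the selected experts run for a given token, so compute per token stays far below the full parameter count even though all expert weights must be kept in memory.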


Public · fp16 · 64k context
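
As a usage sketch, the checkpoint can be loaded through the Hugging Face transformers library; the generation settings below are illustrative assumptions, and running the full 8x22B model in fp16 requires multiple high-memory GPUs. Because the model is not instruction tuned, the prompt is a plain completion prefix rather than a chat turn.

```python
# Hedged example of loading Mixtral-8x22B-v0.1 with transformers (settings are assumptions).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 weights, matching the listing above
    device_map="auto",          # spread the experts across available GPUs
)

prompt = "The mixture-of-experts architecture works by"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```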
