mistralai/Mixtral-8x22B-v0.1

Mixtral-8x22B is the latest and largest mixture-of-experts (MoE) large language model (LLM) from Mistral AI. It is a state-of-the-art model built from a mixture of 8 expert networks of 22B parameters each; during inference, 2 experts are selected per token. This sparse architecture allows very large models to remain fast and inexpensive at inference time. This model is not instruction tuned.
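The core idea behind this architecture is sparse routing: a small router scores all 8 experts for each token and activates only the top 2, so only a fraction of the total parameters is used per forward pass. Below is a minimal, illustrative sketch of such a top-2 MoE layer in PyTorch; the layer sizes, module names, and routing details are assumptions for illustration and do not reproduce Mistral AI's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopTwoMoE(nn.Module):
    """Illustrative sparse mixture-of-experts layer (not Mistral AI's code):
    a linear router picks the top-2 of 8 feed-forward experts per token,
    so only part of the total parameters is active for any given input."""

    def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                                 # x: (tokens, d_model)
        logits = self.router(x)                           # (tokens, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)    # keep the 2 best experts per token
        weights = F.softmax(weights, dim=-1)              # normalise over the chosen 2
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                     # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

# Usage: route a batch of 4 token vectors through the layer.
layer = TopTwoMoE()
tokens = torch.randn(4, 64)
print(layer(tokens).shape)  # torch.Size([4, 64])
```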

Visibility: Public
Precision: fp16
Context length: 65,536 tokens
demoapi
Version: 42a1ba7ede3e491b47fc3fdc4a61b7ebff9442e1
2024-04-10T20:23:58+00:00