

BAAI/bge-en-icl

$0.010 / 1M tokens

An LLM-based embedding model with in-context learning capabilities that achieves SOTA performance on the BEIR and AIR-Bench benchmarks. It leverages few-shot examples to enhance task performance.


Input

inputs

Settings

Service Tier

The service tier used for processing the request. When set to 'priority', the request will be processed with higher priority.

Normalize

Whether to normalize the computed embeddings.
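Normalization here means scaling each embedding to unit (L2) length, which makes dot products equivalent to cosine similarity. A minimal sketch of that operation (the helper name `l2_normalize` is ours, not part of the API):

```python
import math

def l2_normalize(vec):
    """Scale a vector to unit L2 length; leave an all-zero vector unchanged."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec] if norm else list(vec)

unit = l2_normalize([3.0, 4.0])  # a 3-4-5 triangle: becomes [0.6, 0.8]
```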

Dimensions

The number of dimensions in the embedding. If not provided, the model's default is used. If a value larger than the model's default is provided, the embedding is padded with zeros. (Default: empty; 32 ≤ dimensions ≤ 8192)
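The padding rule above can be sketched as follows. The function name is hypothetical; the behavior for requested sizes at or below the model's native size isn't documented here, so this sketch simply returns the vector unchanged in that case (an assumption):

```python
def pad_embedding(vec, dimensions):
    """Right-pad an embedding with zeros when `dimensions` exceeds its length,
    enforcing the documented 32-8192 range for the setting."""
    if not 32 <= dimensions <= 8192:
        raise ValueError("dimensions must be in [32, 8192]")
    if dimensions <= len(vec):
        return list(vec)  # assumption: smaller requests leave the vector as-is
    return list(vec) + [0.0] * (dimensions - len(vec))
```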

Custom Instruction

Custom instruction prepended to each input. If empty, no instruction is used. (Default: empty)
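Putting the settings above together, a request body might look like the sketch below. The field names (`service_tier`, `normalize`, `dimensions`, `instruction`) mirror the setting labels but are assumptions; check the provider's API reference for the exact schema:

```python
# Hypothetical request payload for the embeddings endpoint; field names
# are inferred from the settings above, not confirmed by the API schema.
payload = {
    "model": "BAAI/bge-en-icl",
    "inputs": ["first passage to embed", "second passage to embed"],
    "service_tier": "priority",  # higher-priority processing
    "normalize": True,           # return unit-length embeddings
    "dimensions": 4096,          # must lie in [32, 8192]
    "instruction": "",           # empty: no custom instruction prepended
}
```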

Output

[
  [
    0,
    0.5,
    1
  ],
  [
    1,
    0.5,
    0
  ]
]
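The output is a list of embedding vectors, one per input, which are typically compared with cosine similarity. A small sketch using the example output above (the helper name is ours):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: dot product over norms."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

output = [[0, 0.5, 1], [1, 0.5, 0]]
sim = cosine_similarity(output[0], output[1])  # 0.25 / 1.25 = 0.2
```

Note that if the `normalize` setting is enabled, the norms are already 1 and a plain dot product gives the same result.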
Model Information

BGE-EN-ICL

A large language model-based embedding model that supports in-context learning for enhanced task adaptation. Key features:

  • In-context learning with few-shot examples
  • SOTA performance on BEIR and AIR-Bench benchmarks
  • Flexible usage through FlagEmbedding or HuggingFace Transformers
  • Supports both zero-shot and few-shot scenarios
  • 7.11B parameters with F32 precision

For implementation details and usage examples, visit our GitHub repository.