shibing624/text2vec-base-chinese

A Chinese sentence similarity model whose embeddings can support downstream NLP tasks such as text classification, sentiment analysis, named entity recognition, and question answering. It uses the CoSENT architecture, a transformer encoder followed by a pooling module, to encode input texts into vectors that capture their semantic meaning. The model was trained on the nli_zh dataset and performs strongly on the Chinese text-matching benchmarks reported under Evaluation below.

This is a CoSENT (Cosine Sentence) model: shibing624/text2vec-base-chinese.

It maps sentences to a 768-dimensional dense vector space and can be used for tasks like sentence embedding, text matching, or semantic search.
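For example, encoding two sentences and scoring their similarity takes only a few lines. A minimal sketch, assuming the sentence-transformers package is installed (the model is published on the HF model hub in sentence-transformers format); the example sentences are illustrative:

# Minimal sketch: encode sentences, then compare them with cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("shibing624/text2vec-base-chinese")

sentences = ["如何更换花呗绑定银行卡", "花呗更改绑定银行卡"]
embeddings = model.encode(sentences)  # array of shape (2, 768)

# Cosine similarity between the two 768-dimensional sentence vectors
print(util.cos_sim(embeddings[0], embeddings[1]))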

Evaluation

For an automated evaluation of this model, see the Evaluation Benchmark: text2vec

  • Chinese text matching task:

| Arch | BaseModel | Model | ATEC | BQ | LCQMC | PAWSX | STS-B | SOHU-dd | SOHU-dc | Avg | QPS |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Word2Vec | word2vec | w2v-light-tencent-chinese | 20.00 | 31.49 | 59.46 | 2.57 | 55.78 | 55.04 | 20.70 | 35.03 | 23769 |
| SBERT | xlm-roberta-base | sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 | 18.42 | 38.52 | 63.96 | 10.14 | 78.90 | 63.01 | 52.28 | 46.46 | 3138 |
| Instructor | hfl/chinese-roberta-wwm-ext | moka-ai/m3e-base | 41.27 | 63.81 | 74.87 | 12.20 | 76.96 | 75.83 | 60.55 | 57.93 | 2980 |
| CoSENT | hfl/chinese-macbert-base | shibing624/text2vec-base-chinese | 31.93 | 42.67 | 70.16 | 17.21 | 79.30 | 70.27 | 50.42 | 51.61 | 3008 |
| CoSENT | hfl/chinese-lert-large | GanymedeNil/text2vec-large-chinese | 32.61 | 44.59 | 69.30 | 14.51 | 79.44 | 73.01 | 59.04 | 53.12 | 2092 |
| CoSENT | nghuyong/ernie-3.0-base-zh | shibing624/text2vec-base-chinese-sentence | 43.37 | 61.43 | 73.48 | 38.90 | 78.25 | 70.60 | 53.08 | 59.87 | 3089 |
| CoSENT | nghuyong/ernie-3.0-base-zh | shibing624/text2vec-base-chinese-paraphrase | 44.89 | 63.58 | 74.24 | 40.90 | 78.93 | 76.70 | 63.30 | 63.08 | 3066 |
| CoSENT | sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 | shibing624/text2vec-base-multilingual | 32.39 | 50.33 | 65.64 | 32.56 | 74.45 | 68.88 | 51.17 | 53.67 | 4004 |

Notes:

  • Evaluation metric: Spearman correlation coefficient (a short computation sketch follows this list)
  • shibing624/text2vec-base-chinese was trained with the CoSENT method on the Chinese STS-B data, based on hfl/chinese-macbert-base, and achieves good results on the Chinese STS-B test set. Run examples/training_sup_text_matching_model.py to train the model; the model files have been uploaded to the HF model hub. Recommended for general-purpose Chinese semantic matching tasks.
  • shibing624/text2vec-base-chinese-sentence was trained with the CoSENT method, based on nghuyong/ernie-3.0-base-zh, on the manually curated Chinese STS dataset shibing624/nli-zh-all/text2vec-base-chinese-sentence-dataset, and achieves good results on various Chinese NLI test sets. Run examples/training_sup_text_matching_model_jsonl_data.py to train the model; the model files have been uploaded to the HF model hub. Recommended for Chinese s2s (sentence vs. sentence) semantic matching tasks.
  • shibing624/text2vec-base-chinese-paraphrase was trained with the CoSENT method, based on nghuyong/ernie-3.0-base-zh, on the manually curated Chinese STS dataset shibing624/nli-zh-all/text2vec-base-chinese-paraphrase-dataset, which adds s2p (sentence-to-paraphrase) data relative to shibing624/nli-zh-all/text2vec-base-chinese-sentence-dataset to strengthen its representation of long text. It achieves SOTA results on various Chinese NLI test sets. Run examples/training_sup_text_matching_model_jsonl_data.py to train the model; the model files have been uploaded to the HF model hub. Recommended for Chinese s2p (sentence vs. paragraph) semantic matching tasks.
  • sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 was trained with SBERT; it is the multilingual version of the paraphrase-MiniLM-L12-v2 model and supports Chinese, English, and other languages.
  • w2v-light-tencent-chinese is a Word2Vec model built from Tencent word embeddings; it loads on CPU and is suitable for literal (surface-form) Chinese matching tasks and cold-start scenarios where data is scarce.
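
To make the metric concrete, here is a hedged sketch of how a Spearman score like those in the table could be computed for one test set; the scores and labels below are hypothetical values, not data from the benchmark:

# Hypothetical model outputs and gold annotations for a handful of pairs
from scipy.stats import spearmanr

pred_scores = [0.92, 0.31, 0.77, 0.05]   # cosine similarities from the model
gold_labels = [5, 1, 4, 0]               # human similarity ratings

corr, _ = spearmanr(pred_scores, gold_labels)
print(f"Spearman: {corr:.4f}")           # rank correlation in [-1, 1]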

Full Model Architecture

CoSENT(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_mean_tokens': True})
)

Intended uses

Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering, or sentence similarity tasks.

By default, input text longer than the encoder's max_seq_length of 128 word pieces (see the architecture above) is truncated.
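
Without sentence-transformers, the same embeddings can be produced with plain HuggingFace transformers by reproducing the mean pooling shown in the architecture above. A sketch; the mean_pooling helper is illustrative, not part of the library:

import torch
from transformers import AutoTokenizer, AutoModel

def mean_pooling(model_output, attention_mask):
    # Average the token embeddings, ignoring padded positions.
    token_embeddings = model_output[0]  # last hidden state, (B, T, 768)
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * mask, 1) / torch.clamp(mask.sum(1), min=1e-9)

tokenizer = AutoTokenizer.from_pretrained("shibing624/text2vec-base-chinese")
model = AutoModel.from_pretrained("shibing624/text2vec-base-chinese")

sentences = ["如何更换花呗绑定银行卡", "花呗更改绑定银行卡"]
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    output = model(**encoded)

embeddings = mean_pooling(output, encoded["attention_mask"])  # shape (2, 768)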

Training procedure

Pre-training

We use the pretrained hfl/chinese-macbert-base model. Please refer to the model card for more detailed information about the pre-training procedure.

Fine-tuning

We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch, then apply a rank loss that compares the similarities of true pairs against those of false pairs.
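
As an illustration of that objective, here is a hedged sketch of a CoSENT-style rank loss in PyTorch; the cosent_loss function and the scale factor of 20 (a common choice in CoSENT implementations) are illustrative, not values confirmed by this card:

import torch

def cosent_loss(cos_sim: torch.Tensor, labels: torch.Tensor, scale: float = 20.0) -> torch.Tensor:
    # cos_sim: cosine similarity of each sentence pair in the batch, shape (B,)
    # labels:  1 for a true (similar) pair, 0 for a false pair, shape (B,)
    cos_sim = cos_sim * scale
    # diff[i, j] = cos_sim[j] - cos_sim[i]; keep only terms where pair i is
    # true and pair j is false, so a false pair outranking a true pair is
    # penalized.
    diff = cos_sim[None, :] - cos_sim[:, None]
    diff = diff[labels[:, None] > labels[None, :]]
    # loss = log(1 + sum(exp(diff))), computed stably via logsumexp with an
    # appended zero term (exp(0) = 1).
    zero = torch.zeros(1, device=diff.device)
    return torch.logsumexp(torch.cat([zero, diff]), dim=0)

Rather than fitting absolute similarity scores, this loss only pushes every true pair's similarity above every false pair's, which is what makes it a rank loss.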

Hyperparameters

Citing & Authors

This model was trained by text2vec.

If you find this model helpful, feel free to cite:

@software{text2vec,
  author = {Xu Ming},
  title = {text2vec: A Tool for Text to Vector},
  year = {2022},
  url = {https://github.com/shibing624/text2vec},
}