
hfl/chinese-roberta-wwm-ext

We present a Chinese pre-trained BERT with Whole Word Masking, an extension of the original BERT model tailored for Chinese natural language processing tasks. During pre-training, this variant masks all the characters that form a whole (segmented) Chinese word together, rather than masking individual tokens independently, which strengthens the masked language modeling objective and improves downstream Chinese language understanding.

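For reference, here is a minimal usage sketch with the Hugging Face transformers library; the example sentence and `top_k` value are illustrative, not taken from the model card.

```python
# Minimal sketch: masked-token prediction with hfl/chinese-roberta-wwm-ext.
from transformers import BertTokenizer, BertForMaskedLM, pipeline

model_name = "hfl/chinese-roberta-wwm-ext"

# This checkpoint is typically loaded with the BERT classes
# (BertTokenizer / BertForMaskedLM), not the RoBERTa classes.
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForMaskedLM.from_pretrained(model_name)

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)

# Predict the character hidden behind [MASK] ("今天天气真[MASK]。" = "The weather today is really [MASK].")
for candidate in fill_mask("今天天气真[MASK]。", top_k=3):
    print(candidate["token_str"], round(candidate["score"], 4))
```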

Public · $0.0005 / sec · demoapi

Version: 5c58d0b8ec1d9014354d691c538661bf00bfdb44 (2023-03-03T03:38:47+00:00)