We present Chinese pre-trained BERT with Whole Word Masking, an extension of the original BERT model tailored for Chinese natural language processing tasks. Instead of masking individual characters independently during masked language model pre-training, whole word masking masks all characters belonging to the same Chinese word at once, which strengthens the model's language understanding on downstream Chinese tasks.
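To make the masking strategy concrete, here is a minimal sketch of whole word masking, not the authors' actual pre-training code: masking candidates are words produced by a Chinese word segmenter, and every character of a selected word is masked together. The `whole_word_mask` helper and the 15% ratio applied over words are illustrative simplifications.

```python
# Simplified illustration of whole word masking (not the official
# pre-training implementation): all characters of a chosen word are
# replaced with [MASK] together, instead of masking characters independently.
import random

def whole_word_mask(words, mask_ratio=0.15, mask_token="[MASK]"):
    """words: segmented Chinese words, e.g. ["使用", "语言模型", "来", "预测"]."""
    num_to_mask = max(1, int(round(len(words) * mask_ratio)))
    masked_indices = set(random.sample(range(len(words)), num_to_mask))
    output = []
    for i, word in enumerate(words):
        if i in masked_indices:
            # Mask every character of the word as a unit.
            output.extend([mask_token] * len(word))
        else:
            # Unmasked words are kept as individual characters,
            # matching BERT's character-level tokenization for Chinese.
            output.extend(list(word))
    return output

print(whole_word_mask(["使用", "语言模型", "来", "预测"]))
```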
When querying the model for masked-token prediction, the input text must include exactly one [MASK] token, as in the sketch below.
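A minimal usage sketch with the Hugging Face `fill-mask` pipeline is shown below. The checkpoint name `hfl/chinese-bert-wwm` and the example prompt are assumptions; substitute the checkpoint and text you actually use.

```python
# Usage sketch, assuming the checkpoint is published on the Hugging Face Hub
# as "hfl/chinese-bert-wwm"; replace the model ID with your own checkpoint.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="hfl/chinese-bert-wwm")

# The prompt must contain exactly one [MASK] token.
for prediction in fill_mask("今天天气很[MASK]。"):
    # Each candidate comes with the filled-in sequence and its score.
    print(f"{prediction['sequence']} ({prediction['score']:.2f})")
```

The pipeline returns the top candidate fillings for the [MASK] position together with their probabilities, analogous to the ranked completions shown in the hosted inference widget.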