This paper presents a fine-tuned Spanish BERT model (BETO) for the Named Entity Recognition (NER) task. The model was trained on the CONLL Corpora ES dataset and achieved an F1 score of 90.17%. The authors also compared their model against other state-of-the-art models, including multilingual BERT and a TinyBERT model, and demonstrated its effectiveness at identifying entities in Spanish text.
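A minimal sketch of how such a fine-tuned BETO model might be set up for token classification with the Hugging Face transformers library is shown below; the checkpoint name dccuchile/bert-base-spanish-wwm-cased and the CoNLL-style label count are assumptions for illustration, not the authors' exact training configuration.

```python
# Sketch: preparing a Spanish BERT (BETO) checkpoint for NER fine-tuning and inference
# with the Hugging Face transformers library. The checkpoint below is the publicly
# released base BETO model (assumption), not the paper's exact fine-tuned model.
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_name = "dccuchile/bert-base-spanish-wwm-cased"  # assumed base BETO checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name,
    num_labels=9,  # CoNLL-style BIO tags: O plus B-/I- for PER, ORG, LOC, MISC
)

# After fine-tuning the classification head on a CoNLL-formatted Spanish corpus,
# the model can be queried through the token-classification pipeline:
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("Gabriel García Márquez nació en Aracataca, Colombia."))
```

With an untrained classification head the pipeline runs but produces meaningless labels; the fine-tuning step on the annotated corpus is what yields the entity predictions evaluated in the paper.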