Davlan/bert-base-multilingual-cased-ner-hrl

A named entity recognition model for 10 high-resource languages, based on a fine-tuned mBERT base model. The model recognizes three types of entities: location, organization, and person. The training data consists of entity-annotated news articles from various datasets for each language, and the model distinguishes between the beginning and continuation of an entity.


bert-base-multilingual-cased-ner-hrl

Model description

bert-base-multilingual-cased-ner-hrl is a Named Entity Recognition model for 10 high-resource languages (Arabic, German, English, Spanish, French, Italian, Latvian, Dutch, Portuguese, and Chinese) based on a fine-tuned mBERT base model. It has been trained to recognize three types of entities: location (LOC), organization (ORG), and person (PER). Specifically, this model is a bert-base-multilingual-cased model that was fine-tuned on an aggregation of NER datasets from these 10 languages.

Intended uses & limitations

How to use

You can use this model with the Transformers pipeline for NER.

from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# Load the tokenizer and the token-classification model from the Hub
tokenizer = AutoTokenizer.from_pretrained("Davlan/bert-base-multilingual-cased-ner-hrl")
model = AutoModelForTokenClassification.from_pretrained("Davlan/bert-base-multilingual-cased-ner-hrl")

# Wrap both in an NER pipeline
nlp = pipeline("ner", model=model, tokenizer=tokenizer)

example = "Nader Jokhadar had given Syria the lead with a well-struck header in the seventh minute."
ner_results = nlp(example)
print(ner_results)  # one prediction dict per recognized (sub-word) token
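
By default the pipeline returns one prediction per sub-word token. If you want whole entity spans instead, the Transformers pipeline accepts an aggregation_strategy argument; a short variation (not part of the original card):

from transformers import pipeline

# aggregation_strategy="simple" merges sub-word pieces and adjacent tokens
# of the same entity into a single span with character offsets.
nlp_grouped = pipeline(
    "ner",
    model="Davlan/bert-base-multilingual-cased-ner-hrl",
    aggregation_strategy="simple",
)
print(nlp_grouped("Nader Jokhadar had given Syria the lead."))
# e.g. [{'entity_group': 'PER', 'word': 'Nader Jokhadar', ...},
#       {'entity_group': 'LOC', 'word': 'Syria', ...}]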

Limitations and bias

This model is limited by its training dataset of entity-annotated news articles from a specific span of time, so it may not generalize well to all use cases in different domains.

Training data

The training data for the 10 languages are from:

Language     Dataset
Arabic       ANERcorp
German       CoNLL 2003
English      CoNLL 2003
Spanish      CoNLL 2002
French       Europeana Newspapers
Italian      Italian I-CAB
Latvian      Latvian NER
Dutch        CoNLL 2002
Portuguese   Paramopama + Second HAREM
Chinese      MSRA

The training dataset distinguishes between the beginning and continuation of an entity, so that if there are back-to-back entities of the same type, the model can mark where the second entity begins. Following the dataset, each token is classified as one of the following classes (a small decoding sketch follows the table):

Abbreviation   Description
O              Outside of a named entity
B-PER          Beginning of a person’s name right after another person’s name
I-PER          Person’s name
B-ORG          Beginning of an organisation right after another organisation
I-ORG          Organisation
B-LOC          Beginning of a location right after another location
I-LOC          Location
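
Note that this is the IOB1 convention: an entity normally begins with an I- tag, and B- appears only when an entity immediately follows another entity of the same type. A minimal decoding sketch (the function name and the token-level inputs are illustrative, not part of the card) that groups per-token tags into spans:

def group_entities(tokens, tags):
    """Group per-token IOB1 tags into (entity_type, text) spans.

    A B- tag always opens a new span; an I- tag continues the current
    span if the type matches, otherwise it opens a new one.
    """
    spans = []
    current_type, current_tokens = None, []
    for token, tag in zip(tokens, tags):
        if tag == "O":
            if current_tokens:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
            continue
        prefix, entity_type = tag.split("-", 1)
        if prefix == "B" or entity_type != current_type:
            if current_tokens:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = entity_type, [token]
        else:
            current_tokens.append(token)
    if current_tokens:
        spans.append((current_type, " ".join(current_tokens)))
    return spans

print(group_entities(
    ["Nader", "Jokhadar", "had", "given", "Syria", "the", "lead"],
    ["I-PER", "I-PER", "O", "O", "I-LOC", "O", "O"],
))
# [('PER', 'Nader Jokhadar'), ('LOC', 'Syria')]

The aggregation_strategy option shown earlier performs a similar grouping for you, with sub-word handling on top.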

Training procedure

This model was trained on an NVIDIA V100 GPU with the recommended hyperparameters from the Hugging Face example code.
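
The card does not give the exact training script or hyperparameter values, so the following is only a minimal sketch of a comparable fine-tuning run, using the Hugging Face Trainer on one of the ten source corpora (CoNLL 2003). The learning rate, batch size, and epoch count below are assumptions, not the reported settings.

from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForTokenClassification,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("conll2003")  # one of the ten source corpora
label_list = dataset["train"].features["ner_tags"].feature.names

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=len(label_list)
)

def tokenize_and_align(batch):
    # mBERT splits words into sub-word pieces; label only the first piece
    # of each word and mask the rest (and special tokens) with -100.
    tokenized = tokenizer(batch["tokens"], truncation=True, is_split_into_words=True)
    labels = []
    for i, tags in enumerate(batch["ner_tags"]):
        word_ids = tokenized.word_ids(batch_index=i)
        previous, row = None, []
        for word_id in word_ids:
            if word_id is None or word_id == previous:
                row.append(-100)
            else:
                row.append(tags[word_id])
            previous = word_id
        labels.append(row)
    tokenized["labels"] = labels
    return tokenized

tokenized = dataset.map(tokenize_and_align, batched=True)

args = TrainingArguments(
    output_dir="mbert-ner",
    learning_rate=5e-5,              # assumed, not the reported value
    per_device_train_batch_size=32,  # assumed
    num_train_epochs=3,              # assumed
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()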