A pre-trained language model developed specifically for protein sequences using a masked language modeling (MLM) objective. It achieved impressive results when fine-tuned on downstream tasks such as secondary structure prediction and subcellular localization. The model was trained on uppercase amino acids only and uses a vocabulary size of 21, with inputs of the form "[CLS] Protein Sequence A [SEP] Protein Sequence B [SEP]".
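A minimal sketch of querying such a protein MLM for masked-residue prediction with the Hugging Face `transformers` fill-mask pipeline. The checkpoint name `Rostlab/prot_bert` and the example sequence are assumptions for illustration; substitute the actual model identifier and your own sequence.

```python
# Sketch: masked-residue prediction with a protein MLM via transformers.
# "Rostlab/prot_bert" is an assumed checkpoint name for illustration.
from transformers import BertTokenizer, BertForMaskedLM, pipeline

model_name = "Rostlab/prot_bert"  # assumed checkpoint
tokenizer = BertTokenizer.from_pretrained(model_name, do_lower_case=False)
model = BertForMaskedLM.from_pretrained(model_name)

# Sequences are given as uppercase amino acids separated by spaces,
# with exactly one residue replaced by [MASK].
unmasker = pipeline("fill-mask", model=model, tokenizer=tokenizer)
sequence = "D L I P T S S K L V V [MASK] D T S L Q V K K A F F A L V T"

# Print the top predicted residues with their probabilities.
for prediction in unmasker(sequence, top_k=5):
    print(prediction["token_str"], round(prediction["score"], 3))
```

Because the tokenizer treats each amino acid as a separate token, the input must be space-separated and uppercase to match the training vocabulary.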