A RoBERTa-base model trained on ~124M tweets from January 2018 to December 2021, and fine-tuned for sentiment analysis with the TweetEval benchmark. This model is suitable for English.
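Below is a minimal usage sketch with the Hugging Face `transformers` pipeline API. The model ID (`cardiffnlp/twitter-roberta-base-sentiment-latest`) is assumed from the description above, and the example tweet is purely illustrative.

```python
from transformers import pipeline

# Model ID is assumed for illustration; substitute the actual repository ID
# of this model if it differs.
MODEL = "cardiffnlp/twitter-roberta-base-sentiment-latest"

# Build a sentiment-analysis pipeline backed by the fine-tuned RoBERTa model.
sentiment_task = pipeline("sentiment-analysis", model=MODEL, tokenizer=MODEL)

# Classify a sample tweet; output is a list of {'label': ..., 'score': ...} dicts.
print(sentiment_task("Covid cases are increasing fast!"))
```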