A RoBERTa-base model trained on ~124M tweets posted between January 2018 and December 2021, and fine-tuned for sentiment analysis on the TweetEval benchmark. This model is suitable for English.