RoBERTa

The RoBERTa model was proposed in RoBERTa: A Robustly Optimized BERT Pretraining Approach by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. It builds on Google's BERT model released in 2018 and modifies key hyperparameters, removing the next-sentence pretraining objective and training with much larger mini-batches and learning rates.

Tags: Text Classification, Token Classification, Fill-Mask, Question Answering

URL(s):
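Since the model is tagged for Fill-Mask among other tasks, a minimal usage sketch follows. It assumes the Hugging Face transformers library and the publicly available roberta-base checkpoint; neither is named on this page, so treat both as illustrative choices rather than part of this record.

# Minimal sketch: filling a masked token with RoBERTa, assuming the
# Hugging Face `transformers` library and the "roberta-base" checkpoint.
from transformers import pipeline

# RoBERTa uses <mask> as its mask token (unlike BERT's [MASK]).
unmasker = pipeline("fill-mask", model="roberta-base")
predictions = unmasker("The goal of life is <mask>.")

# Each prediction carries a proposed token and its probability.
for p in predictions:
    print(p["token_str"], round(p["score"], 4))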


Associated Projects (1)

FAIR for NFDI4DataScience

This project contains the FAIR assessments of several artifact types from NFDI4DataScience. The metr...

FAIR assessment, NFDI4DataScience

Associated Rubrics (1)

FAIR for Machine Learning Models

This rubric consists of assessment metrics that evaluate the FAIR maturity of ML models. The metrics...

FAIR machine learning model, FAIR assessment, NFDI4DataScience