XLM-RoBERTa (base-sized model)

XLM-RoBERTa model pre-trained on 2.5TB of filtered CommonCrawl data covering 100 languages. It was introduced in the paper Unsupervised Cross-lingual Representation Learning at Scale by Conneau et al. and first released in this repository.

Disclaimer: The team releasing XLM-RoBERTa did not write a model card for this model, so this model card has been written by the Hugging Face team.

Tags: Fill-Mask · Transformers · PyTorch · TensorFlow · JAX · ONNX · Safetensors · 94 languages · xlm-roberta · exbert · Inference Endpoints · arXiv:1911.02116 · License: MIT
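
Since the model is tagged for fill-mask inference, a minimal usage sketch with the Transformers pipeline API follows. It assumes the checkpoint is published on the Hugging Face Hub under the id xlm-roberta-base.

```python
from transformers import pipeline

# Minimal sketch: masked-token prediction with the base checkpoint.
# The model id "xlm-roberta-base" (Hugging Face Hub) is assumed here.
unmasker = pipeline("fill-mask", model="xlm-roberta-base")

# XLM-RoBERTa expects the <mask> token in the input text.
print(unmasker("Hello I'm a <mask> model."))
```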

URL(s):


Associated Projects (1)

FAIR for NFDI4DataScience

This project contains the FAIR assessments of several artifact types from NFDI4DataScience. The metr...

Keywords: FAIR assessment · NFDI4DataScience

Associated Rubrics (1)

FAIR for Machine Learning Models

This rubric consists of assessment metrics that evaluate the FAIR maturity of ML models. The metrics...

Keywords: FAIR machine learning model · FAIR assessment · NFDI4DataScience