
FAIR for Machine Learning Models
This rubric consists of assessment metrics that evaluate the FAIR maturity of machine learning (ML) models. The metrics are based on relevant, well-established initiatives. The rubric uses a hybrid assessment method: it combines automated metrics (evaluated via F-UJI) with manually assessed ones, so that results from both assessments are collected in the same FAIRshake rubric and together form the overall FAIR assessment score for an ML model. A minimal sketch of how such a combined score might be aggregated follows the metadata fields below.
License: https://creativecommons.org/licenses/by/4.0/
Tags: FAIR machine learning model, FAIR assessment, NFDI4DataScience
URL(s):
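As an illustration of the hybrid scoring described above, the following Python sketch aggregates automated and manual metric answers into one overall score. The metric names, the equal weighting, and the 0-1 score scale are assumptions made for illustration; they are not taken from the published rubric, nor from the FAIRshake or F-UJI APIs.

```python
# Minimal sketch: combine automated (F-UJI) and manual metric answers into
# one overall rubric score. Names, weights, and the 0-1 scale are illustrative
# assumptions, not part of the published rubric.
from dataclasses import dataclass

@dataclass
class MetricResult:
    name: str
    score: float      # normalized to the range [0, 1]
    automated: bool   # True if produced by F-UJI, False if answered manually

def overall_fair_score(results: list[MetricResult]) -> float:
    """Average all metric scores, regardless of how each was assessed."""
    if not results:
        return 0.0
    return sum(r.score for r in results) / len(results)

# Example: two automated answers (e.g. from an F-UJI run) and one manual answer.
results = [
    MetricResult("Persistent identifier assigned", 1.0, automated=True),
    MetricResult("Metadata are machine-readable", 0.5, automated=True),
    MetricResult("Model training data are documented", 1.0, automated=False),
]
print(f"Overall FAIR score: {overall_fair_score(results):.2f}")  # prints 0.83
```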
Associated Metrics (27)
Associated Digital Objects (6)
WordPair-CNN
Code repository for discourse relation prediction using word pair CNNs.
German Zeroshot
This model uses GBERT Large as its base model and is fine-tuned on the XNLI German (xnli de) dataset.
German BERT large
A German BERT language model trained collaboratively by the makers of the original German BERT (aka ...
Fine-mixing: Mitigating Backdoors in Fine-tuned Language Models
Deep Neural Networks (DNNs) are known to be vulnerable to backdoor attacks. In Natural Language Proc...
XLM-RoBERTa (base-sized model)
XLM-RoBERTa model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was...
RoBERTa
The RoBERTa model was proposed in RoBERTa: A Robustly Optimized BERT Pretraining Approach by Yinhan ...