FAIR for Machine Learning Models

This rubric consists of assessment metrics that evaluate the FAIR maturity of machine learning (ML) models. The metrics are derived from relevant, well-established initiatives. The rubric uses a hybrid assessment method, combining automated and manual metrics: the results of the automated assessment (conducted via F-UJI) and of the manual assessment are recorded in the same FAIRshake evaluation rubric and together form the overall FAIR assessment score for an ML model.
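
As a minimal sketch of the automated half of this rubric, assuming a default local F-UJI deployment (the host, port, endpoint path, credentials, and payload fields below are assumptions that depend on your instance), an ML model identifier can be submitted for evaluation like so:

```python
# Hedged sketch: submit an ML model identifier to a local F-UJI instance.
# Endpoint, port, and credentials are assumptions for a default deployment.
import requests

FUJI_ENDPOINT = "http://localhost:1071/fuji/api/v1/evaluate"

payload = {
    # Persistent identifier or landing page of the ML model under assessment.
    "object_identifier": "https://huggingface.co/xlm-roberta-base",
    "test_debug": True,
    "use_datacite": True,
}

response = requests.post(
    FUJI_ENDPOINT,
    json=payload,
    auth=("user", "password"),  # replace with your instance's credentials
    timeout=300,
)
response.raise_for_status()
result = response.json()

# The automated F-UJI scores are combined with the manual FAIRshake answers
# to form the model's overall FAIR assessment score.
print(result.get("summary", result))
```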

License: https://creativecommons.org/licenses/by/4.0/

Tags: FAIR machine learning model, FAIR assessment, NFDI4DataScience

Associated Metrics (27)

ML model training

ML-R1.7-01M: Metadata describes ML model training process. Test: 1) The learning process is expla...

ML model evaluation

ML-R1.7-02M: Metadata describes ML model evaluation. Test: 1) The result of the learning process...

References to other ML models

ML-R2-01M: Metadata contains references to other ML models. Test: 1) Relations to ML models are ...

ML model metadata standards

ML-R3-01M: Metadata follows a standard recommended by the target research community of the ML model....
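
As an illustration of what ML-R3-01M looks for, the sketch below writes model metadata following the Hugging Face model-card convention (YAML front matter in a README), used here purely as one example of a community-recommended standard; all field values are hypothetical:

```python
# Hypothetical sketch of community-standard ML model metadata (ML-R3-01M),
# using the Hugging Face model-card YAML front matter as an example standard.
import yaml  # PyYAML

card_metadata = {
    "language": "de",
    "license": "mit",
    "library_name": "transformers",
    "tags": ["text-classification", "zero-shot-classification"],
    "datasets": ["xnli"],
    "base_model": "deepset/gbert-large",  # illustrative value
}

front_matter = yaml.safe_dump(card_metadata, sort_keys=False)

with open("README.md", "w", encoding="utf-8") as f:
    f.write(f"---\n{front_matter}---\n\n# Model description\n")
```

A base_model field like the one above is also the kind of explicit relation that ML-R2-01M (references to other ML models) asks for.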

ML model file format

ML-R3-02S: ML model is available in a file format recommended by the ML community. Test: 1) The f...
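
As an illustration of ML-R3-02S, the sketch below serializes model weights with safetensors, assumed here as one example of a community-recommended format (it appears in the tags of several digital objects below); the tiny linear model is a stand-in for a real trained model:

```python
# Hedged sketch of ML-R3-02S: store model weights in safetensors, one file
# format widely recommended by the ML community. Requires PyTorch and the
# `safetensors` package; the model here is purely illustrative.
import torch
from safetensors.torch import load_file, save_file

model = torch.nn.Linear(4, 2)  # stand-in for a real trained model

# safetensors stores a flat mapping from tensor names to tensors.
save_file(model.state_dict(), "model.safetensors")

# Round trip: reload the weights from the serialized file.
model.load_state_dict(load_file("model.safetensors"))
```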


Associated Digital Objects (6)

WordPair-CNN

Code repository for discourse relation prediction using word pair CNNs.

German Zeroshot

This model uses GBERT Large as its base model and was fine-tuned on the German XNLI (xnli de) dataset.

Tags: Zero-Shot Classification, Transformers, PyTorch, JAX, xnli, multilingual, bert, text-classification, nli, de, Inference Endpoints

German BERT large

A German BERT language model trained collaboratively by the makers of the original German BERT (aka ...

Tags: Fill-Mask, Transformers, PyTorch, TensorFlow, Safetensors, 4 datasets, German, Inference Endpoints

Fine-mixing: Mitigating Backdoors in Fine-tuned Language Models

Deep Neural Networks (DNNs) are known to be vulnerable to backdoor attacks. In Natural Language Proc...

XLM-RoBERTa (base-sized model)

XLM-RoBERTa model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was...

Tags: Fill-Mask, Transformers, PyTorch, TensorFlow, JAX, ONNX, Safetensors, 94 languages, xlm-roberta, exbert, Inference Endpoints. arXiv: 1911.02116. License: MIT

RoBERTa

The RoBERTa model was proposed in RoBERTa: A Robustly Optimized BERT Pretraining Approach by Yinhan ...

Tags: Text Classification, Token Classification, Fill-Mask, Question Answering