ML model evaluation

Metric

ML-R1.7-02M: Metadata describes ML model evaluation. Test: 1) The result of the learning process is explained and described.

Principle: R

Rationale: ML model evaluation should be described with all notable indicators, such as the confusion matrix, F1 score, and area under the ROC curve (AUC-ROC). These indicators should demonstrate the quality and performance of the model. This is a disciplinary requirement.

FAIR Metrics: R1.7
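
For illustration only, the following minimal sketch (not part of the metric definition) shows one way the indicators named in the rationale, such as the confusion matrix, F1 score, and AUC-ROC, could be computed and stored as machine-readable evaluation metadata alongside a model. It assumes scikit-learn is available; the toy dataset, the output file name, and the JSON field names are hypothetical.

# Illustrative sketch: compute common evaluation indicators and store them
# as machine-readable metadata next to a model. Assumes scikit-learn is
# installed; the JSON field names and file name are hypothetical.
import json
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Toy binary classification problem standing in for a real training run.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = model.predict(X_test)
y_score = model.predict_proba(X_test)[:, 1]

# Evaluation indicators named in the rationale above.
evaluation_metadata = {
    "confusion_matrix": confusion_matrix(y_test, y_pred).tolist(),
    "f1_score": float(f1_score(y_test, y_pred)),
    "roc_auc": float(roc_auc_score(y_test, y_score)),
}

# Persist alongside the model so the evaluation is described in metadata.
with open("model_evaluation.json", "w") as fh:
    json.dump(evaluation_metadata, fh, indent=2)

Recording the indicators in a structured file of this kind, rather than only in free-text documentation, is one way the result of the learning process can be explained and described as the test requires.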


Associated Rubrics (1)

FAIR for Machine Learning Models

This rubric consists of assessment metrics that evaluate the FAIR maturity of ML models. The metrics...

Keywords: FAIR machine learning model, FAIR assessment, NFDI4DataScience