This software repository contains Aegis (Active Evaluator Germane Interactive Selector), a Python package that evaluates a machine learning system's performance (according to a metric such as accuracy) by adaptively sampling trials to label from an unlabeled test set, minimizing the number of labels needed. The repository includes sample (public) data as well as a simulation script that tests different label-selection strategies on already-labelled test sets. The software is configured so that users can add their own data and system outputs to test evaluation.
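To illustrate the idea the description refers to, here is a minimal, hypothetical sketch (not the Aegis API; all names and the confidence-based proposal are illustrative assumptions) of how adaptively choosing which test items to label can still yield an unbiased accuracy estimate, via importance weighting:

```python
# Sketch only: estimate a system's accuracy from a small number of labels by
# sampling items from a non-uniform selection strategy, then correcting the
# estimate with importance weights so it remains unbiased.
import numpy as np

rng = np.random.default_rng(0)

# Simulate a labelled test set: the system reports a confidence per item and
# is correct with probability equal to that confidence. In a real evaluation,
# `correct` is unknown for an item until that item is labelled.
N = 10_000
confidence = rng.uniform(0.5, 1.0, size=N)
correct = (rng.uniform(size=N) < confidence).astype(float)
true_accuracy = correct.mean()

def estimate_accuracy(proposal, n_labels):
    """Draw n_labels items (with replacement) from `proposal` and return an
    importance-weighted, unbiased estimate of overall accuracy."""
    q = proposal / proposal.sum()
    idx = rng.choice(N, size=n_labels, p=q)
    weights = (1.0 / N) / q[idx]  # target distribution is uniform over items
    return float(np.mean(correct[idx] * weights))

n_labels = 200
uniform = np.full(N, 1.0)             # plain random sampling
adaptive = (1.0 - confidence) + 1e-3  # over-sample low-confidence items

print(f"true accuracy:     {true_accuracy:.4f}")
print(f"uniform sampling:  {estimate_accuracy(uniform, n_labels):.4f}")
print(f"adaptive sampling: {estimate_accuracy(adaptive, n_labels):.4f}")
```

The actual label-selection strategies implemented in the package may differ; the point of the sketch is only that a selection strategy plus a matching reweighting scheme lets an evaluator spend its labeling budget where it is most informative.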
## About this Dataset
| Field | Value |
|---|---|
| Title | Active Evaluation Software for Selection of Ground Truth Labels |
| Description | This software repository contains Aegis (Active Evaluator Germane Interactive Selector), a Python package that evaluates a machine learning system's performance (according to a metric such as accuracy) by adaptively sampling trials to label from an unlabeled test set, minimizing the number of labels needed. The repository includes sample (public) data as well as a simulation script that tests different label-selection strategies on already-labelled test sets. The software is configured so that users can add their own data and system outputs to test evaluation. |
| Modified | 2020-04-28 00:00:00 |
| Publisher Name | National Institute of Standards and Technology |
| Contact | mailto:[email protected] |
| Keywords | active evaluation, machine learning, ar |
{
  "identifier": "ark:/88434/mds2-2227",
  "accessLevel": "public",
  "contactPoint": {
    "hasEmail": "mailto:[email protected]",
    "fn": "Peter Fontana"
  },
  "programCode": [
    "006:045"
  ],
  "landingPage": "https://github.com/usnistgov/active-evaluation",
  "title": "Active Evaluation Software for Selection of Ground Truth Labels",
  "description": "This software repository contains Aegis (Active Evaluator Germane Interactive Selector), a Python package that evaluates a machine learning system's performance (according to a metric such as accuracy) by adaptively sampling trials to label from an unlabeled test set, minimizing the number of labels needed. The repository includes sample (public) data as well as a simulation script that tests different label-selection strategies on already-labelled test sets. The software is configured so that users can add their own data and system outputs to test evaluation.",
  "language": [
    "en"
  ],
  "distribution": [
    {
      "accessURL": "https://doi.org/10.18434/M32227",
      "title": "DOI Access for Active Evaluation Software for Selection of Ground Truth Labels"
    }
  ],
  "bureauCode": [
    "006:55"
  ],
  "modified": "2020-04-28 00:00:00",
  "publisher": {
    "@type": "org:Organization",
    "name": "National Institute of Standards and Technology"
  },
  "theme": [
    "Information Technology:Data and informatics"
  ],
  "keyword": [
    "active evaluation",
    "machine learning",
    "ar"
  ]
}
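For readers who want to consume the record above programmatically, here is a small, hypothetical sketch using only the Python standard library; the file name `data.json` is an assumption about where the record has been saved.

```python
# Sketch only: read the metadata record and pull out a few common fields.
import json

with open("data.json") as f:  # assumes the record above is saved as data.json
    record = json.load(f)

print(record["title"])
print("landing page:", record["landingPage"])
for dist in record.get("distribution", []):
    print("distribution:", dist["accessURL"])
```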