mitigation-image-classification-jun2024-train dataset

This is the training data used to create and evaluate trojan detection software solutions. This data, generated at NIST, consists of image classification AIs. A known percentage of these trained AI models have been poisoned with a known trigger that induces incorrect behavior. This data will be used to develop software solutions for mitigating or removing that trigger behavior from the trained AI models. The dataset consists of 288 AI models built from a small set of model architectures.
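As a rough sketch of what "trigger behavior" means in practice, the snippet below measures a model's attack success rate: the fraction of images that flip to an attacker-chosen class once the trigger is stamped onto them. It assumes a PyTorch model plus a hypothetical `apply_trigger` function and `target_class`; the actual file layout, trigger form, and target labels are defined by the challenge data and are not shown here.

```python
import torch

def attack_success_rate(model, images, labels, apply_trigger, target_class):
    """Fraction of non-target-class images that the model classifies as the
    attacker's target class after the (hypothetical) trigger is applied."""
    model.eval()
    with torch.no_grad():
        preds = model(apply_trigger(images)).argmax(dim=1)
    mask = labels != target_class  # ignore images that already carry the target label
    return (preds[mask] == target_class).float().mean().item()

# Hypothetical usage; paths and trigger parameters are illustrative only.
# model = torch.load("models/id-00000001/model.pt")
# asr_before = attack_success_rate(model, x, y, apply_trigger, target_class=0)
# ...apply a mitigation method, then re-measure the rate and the clean accuracy...
```

A mitigation is typically judged by how far it drives this rate down while leaving accuracy on clean (untriggered) images intact.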
About this Dataset
| Title | Trojan Detection Software Challenge - mitigation-image-classification-jun2024-train |
|---|---|
| Description | mitigation-image-classification-jun2024-train dataset: This is the training data used to create and evaluate trojan detection software solutions. This data, generated at NIST, consists of image classification AIs. A known percentage of these trained AI models have been poisoned with a known trigger that induces incorrect behavior. This data will be used to develop software solutions for mitigating/removing that trigger behavior from the trained AI models. The dataset consists of 288 AI models using a small set of model architectures. |
| Modified | 2024-06-30 00:00:00 |
| Publisher Name | National Institute of Standards and Technology |
| Contact | mailto:[email protected] |
| Keywords | Trojan Detection; Artificial Intelligence; AI; Machine Learning; Adversarial Machine Learning |
{
"identifier": "ark:\/88434\/mds2-3653",
"accessLevel": "public",
"contactPoint": {
"hasEmail": "mailto:[email protected]",
"fn": "Michael Paul Majurski"
},
"programCode": [
"006:045"
],
"landingPage": "https:\/\/data.nist.gov\/od\/id\/mds2-3653",
"title": "Trojan Detection Software Challenge - mitigation-image-classification-jun2024-train",
"description": "mitigation-image-classification-jun2024-train datasetThis is the training data used to create and evaluate trojan detection software solutions. This data, generated at NIST, consists of image classification AIs. A known percentage of these trained AI models have been poisoned with a known trigger which induces incorrect behavior. This data will be used to develop software solutions for mitigating\/removing that trigger behavior from the trained AI models. This dataset consists of 288 AI models using a small set of model architectures.",
"language": [
"en"
],
"distribution": [
{
"accessURL": "https:\/\/drive.google.com\/drive\/folders\/1VJPQgyydbOifr0UXO2eybxZBN1a5uY6f?usp=sharing",
"title": "mitigation-image-classification-jun2024-train"
}
],
"bureauCode": [
"006:55"
],
"modified": "2024-06-30 00:00:00",
"publisher": {
"@type": "org:Organization",
"name": "National Institute of Standards and Technology"
},
"theme": [
"Information Technology:Software research",
"Information Technology:Cybersecurity"
],
"keyword": [
"Trojan Detection; Artificial Intelligence; AI; Machine Learning; Adversarial Machine Learning;"
]
}
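The distribution entry above points at a Google Drive folder. One possible way to fetch it from a script is the gdown package; the output directory name here is only an assumption.

```python
# pip install gdown
import gdown

# Folder URL taken from the accessURL in the distribution entry above.
url = "https://drive.google.com/drive/folders/1VJPQgyydbOifr0UXO2eybxZBN1a5uY6f"
gdown.download_folder(url=url, output="mitigation-image-classification-jun2024-train", quiet=False)
```

Note that gdown's folder download has a file-count limit, so if the transfer stops early the Drive web interface is the fallback.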