Round 1 Training Dataset

The data being generated and disseminated is the training data used to construct trojan-detection software solutions. This data, generated at NIST, consists of human-level AI models trained to perform a variety of tasks (image classification, natural language processing, etc.). A known percentage of these trained AI models have been poisoned with a known trigger that induces incorrect behavior. This data will be used to develop software solutions for detecting which trained AI models have been poisoned via embedded triggers.

This dataset consists of 1,000 trained, human-level, image-classification AI models built on three architectures: Inception-v3, DenseNet-121, and ResNet50. The models were trained on synthetically created images of non-real traffic signs superimposed on road background scenes. Half (50%) of the models have been poisoned with an embedded trigger that causes misclassification of the images when the trigger is present.

Errata: a software bug in the trigger-embedding code caused four models trained for this dataset to carry a ground-truth value of 'poisoned' even though no trigger was actually embedded. These models should not be used.

Models without a trigger embedded:

- id-00000184
- id-00000599
- id-00000858
- id-00001088

Google Drive Mirror: https://drive.google.com/open?id=1uwVt3UCRL2fCX9Xvi2tLoz_z-DwbU6Ce
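Because the four errata models are labeled 'poisoned' in the ground truth but contain no trigger, they should be filtered out before building or evaluating a detector. Below is a minimal sketch of doing so, assuming the per-model layout used by the TrojAI rounds (one `id-XXXXXXXX` directory per model containing a full-model `model.pt` checkpoint and a `ground_truth.csv` holding a 0/1 poisoned flag); that layout is an assumption of this example, not spelled out in this record.

```python
from pathlib import Path

import torch

# The four models the errata flags as mislabeled ('poisoned' in the
# ground truth, but with no trigger actually embedded).
BAD_IDS = {"id-00000184", "id-00000599", "id-00000858", "id-00001088"}


def iter_models(root: str):
    """Yield (model_id, is_poisoned, model) for each usable model.

    Assumes each model lives in <root>/id-XXXXXXXX/ with a full-model
    'model.pt' checkpoint and a 'ground_truth.csv' holding a 0/1 flag;
    this layout is assumed from the TrojAI rounds, not this record.
    """
    for model_dir in sorted(Path(root).glob("id-*")):
        if model_dir.name in BAD_IDS:
            continue  # skip the mislabeled models per the errata
        poisoned = (model_dir / "ground_truth.csv").read_text().strip() == "1"
        # Newer PyTorch defaults to weights_only=True; these checkpoints
        # are pickled full models, so that default may need disabling.
        model = torch.load(model_dir / "model.pt", map_location="cpu")
        yield model_dir.name, poisoned, model


if __name__ == "__main__":
    counts = {True: 0, False: 0}
    for _, poisoned, _ in iter_models("image-classification-jun2020-train"):
        counts[poisoned] += 1
    print(f"usable models -- poisoned: {counts[True]}, clean: {counts[False]}")
```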
About this Dataset
Title | Trojan Detection Software Challenge - image-classification-jun2020-train
---|---
Identifier | ark:/88434/mds2-2195
Modified | 2020-03-18 00:00:00
Publisher Name | National Institute of Standards and Technology
Contact | Michael Paul Majurski (mailto:[email protected])
Keywords | Trojan Detection; Artificial Intelligence; AI; Machine Learning; Adversarial Machine Learning
Access Level | public
Language | en
Landing Page | https://data.nist.gov/od/id/mds2-2195
Distribution | image-classification-jun2020-train: https://drive.google.com/drive/folders/1sE6EErrDn_2xq1sh3xPYCzpMu50mGEPi?usp=drive_link
Bureau Code | 006:55
Program Code | 006:045
Theme | Information Technology: Computational science