Round 9 Test Dataset

This is the test data used to evaluate trojan detection software solutions. This data, generated at NIST, consists of natural language processing (NLP) AIs trained to perform one of three tasks on English text: sentiment classification, named entity recognition, or extractive question answering. A known percentage of these trained AI models has been poisoned with a known trigger which induces incorrect behavior. This data will be used to develop software solutions for detecting which trained AI models have been poisoned via embedded triggers. The dataset consists of 210 sentiment classification, named entity recognition, and extractive question answering AI models built from a small set of model architectures. Half (50%) of the models have been poisoned with an embedded trigger which causes misclassification of the input when the trigger is present.
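In practice, the detection task framed above reduces to binary classification over models: a detector assigns each of the 210 models a probability of containing an embedded trigger, and those scores are compared against the known poisoned/clean labels (50% of models poisoned). The sketch below illustrates that scoring step only; the file names (`detector_outputs.csv`, `ground_truth.csv`) and column names are illustrative assumptions, not part of the published dataset layout.

```python
# Minimal sketch of scoring a trojan detector against this dataset.
# NOTE: the file names and column names below are assumptions for illustration;
# consult the dataset's own documentation for the actual layout.
import csv

from sklearn.metrics import roc_auc_score


def load_column(path, key):
    """Read one numeric column of a CSV, keyed by a per-model identifier."""
    values = {}
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            values[row["model_id"]] = float(row[key])
    return values


# Hypothetical files: one probability-of-poisoning per model from the detector,
# and the ground-truth poisoned (1) / clean (0) label for each model.
predictions = load_column("detector_outputs.csv", "probability_poisoned")
labels = load_column("ground_truth.csv", "poisoned")

model_ids = sorted(labels)
y_true = [labels[m] for m in model_ids]
y_score = [predictions[m] for m in model_ids]

# With half of the 210 models poisoned, chance-level ROC AUC is about 0.5.
print(f"ROC AUC over {len(model_ids)} models: {roc_auc_score(y_true, y_score):.3f}")
```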
About this Dataset
Title | Trojan Detection Software Challenge - nlp-summary-jan2022-test |
---|---|
Description | Round 9 Test Dataset. This is the test data used to evaluate trojan detection software solutions. This data, generated at NIST, consists of natural language processing (NLP) AIs trained to perform one of three tasks on English text: sentiment classification, named entity recognition, or extractive question answering. A known percentage of these trained AI models has been poisoned with a known trigger which induces incorrect behavior. This data will be used to develop software solutions for detecting which trained AI models have been poisoned via embedded triggers. The dataset consists of 210 sentiment classification, named entity recognition, and extractive question answering AI models built from a small set of model architectures. Half (50%) of the models have been poisoned with an embedded trigger which causes misclassification of the input when the trigger is present. |
Modified | 2022-01-31 00:00:00 |
Publisher Name | National Institute of Standards and Technology |
Contact | mailto:[email protected] |
Keywords | Trojan Detection; Artificial Intelligence; AI; Machine Learning; Adversarial Machine Learning; |
{ "identifier": "ark:\/88434\/mds2-2781", "accessLevel": "public", "contactPoint": { "hasEmail": "mailto:[email protected]", "fn": "Michael Paul Majurski" }, "programCode": [ "006:045" ], "landingPage": "https:\/\/data.nist.gov\/od\/id\/mds2-2781", "title": "Trojan Detection Software Challenge - nlp-summary-jan2022-test", "description": "Round 9 Test DatasetThis is the test data used to evaluate trojan detection software solutions. This data, generated at NIST, consists of natural language processing (NLP) AIs trained to perform one of three tasks, sentiment classification, named entity recognition, or extractive question answering on English text. A known percentage of these trained AI models have been poisoned with a known trigger which induces incorrect behavior. This data will be used to develop software solutions for detecting which trained AI models have been poisoned via embedded triggers. This dataset consists of 210 Sentiment Classification, Named Entity Recognition, and Extractive Question Answering AI models using a small set of model architectures. Half (50%) of the models have been poisoned with an embedded trigger which causes misclassification of the input when the trigger is present.", "language": [ "en" ], "distribution": [ { "accessURL": "https:\/\/drive.google.com\/drive\/folders\/1voAHxVT1wfAKfhlqBkIXERjWFHmgs6gy?usp=drive_link", "title": "nlp-summary-jan2022-test" } ], "bureauCode": [ "006:55" ], "modified": "2022-01-31 00:00:00", "publisher": { "@type": "org:Organization", "name": "National Institute of Standards and Technology" }, "theme": [ "Information Technology:Software research" ], "keyword": [ "Trojan Detection; Artificial Intelligence; AI; Machine Learning; Adversarial Machine Learning;" ] }