Trojan Detection Software Challenge - nlp-sentiment-classification-mar2021-test

Round 5 Test Dataset

This is the test data used to construct and evaluate trojan detection software solutions. The data, generated at NIST, consists of natural language processing (NLP) AIs trained to perform sentiment classification on English text. A known percentage of these trained AI models have been poisoned with a known trigger that induces incorrect behavior. The data will be used to develop software solutions for detecting which trained AI models have been poisoned via embedded triggers.

The dataset consists of 504 adversarially trained sentiment classification AI models built from a small set of model architectures. The models were trained on text data drawn from movie and product reviews. Half (50%) of the models have been poisoned with an embedded trigger that causes misclassification of the input text when the trigger is present.

Errata: The following models were contaminated during dataset packaging, which caused nominally clean models to contain a trigger. Please avoid using these models. Because the Round 5 and Round 6 datasets are similar (both contain similarly trained sentiment classification AI models), the dataset authors suggest ignoring the Round 5 data and using only the Round 6 dataset.

Corrupted Models: [id-00000000, id-00000003, id-00000004, id-00000005, id-00000011, id-00000022, id-00000074, id-00000076, id-00000084, id-00000091, id-00000094, id-00000147, id-00000149, id-00000156, id-00000159, id-00000162, id-00000166, id-00000168, id-00000171, id-00000176, id-00000178, id-00000216, id-00000217, id-00000220, id-00000222, id-00000223, id-00000227, id-00000233, id-00000238, id-00000239, id-00000246, id-00000290, id-00000293, id-00000301, id-00000314, id-00000323, id-00000367, id-00000368, id-00000369, id-00000372, id-00000379, id-00000388, id-00000433, id-00000438, id-00000441, id-00000447, id-00000451]
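Per the errata, the 47 corrupted models above should be skipped when working with this dataset. Below is a minimal filtering sketch, assuming the archive has been extracted so that each model lives in a directory named after its ID (e.g., id-00000000/); the directory layout and the usable_model_dirs helper are illustrative assumptions, not part of the dataset tooling.

# Minimal sketch: skip the corrupted Round 5 models listed in the errata.
# Assumes the dataset has been extracted so that each model lives in a
# directory named after its ID (e.g., id-00000000/); adjust to the actual layout.
from pathlib import Path

CORRUPTED_IDS = {
    "id-00000000", "id-00000003", "id-00000004", "id-00000005", "id-00000011",
    "id-00000022", "id-00000074", "id-00000076", "id-00000084", "id-00000091",
    "id-00000094", "id-00000147", "id-00000149", "id-00000156", "id-00000159",
    "id-00000162", "id-00000166", "id-00000168", "id-00000171", "id-00000176",
    "id-00000178", "id-00000216", "id-00000217", "id-00000220", "id-00000222",
    "id-00000223", "id-00000227", "id-00000233", "id-00000238", "id-00000239",
    "id-00000246", "id-00000290", "id-00000293", "id-00000301", "id-00000314",
    "id-00000323", "id-00000367", "id-00000368", "id-00000369", "id-00000372",
    "id-00000379", "id-00000388", "id-00000433", "id-00000438", "id-00000441",
    "id-00000447", "id-00000451",
}

def usable_model_dirs(dataset_root):
    """Yield model directories that are not on the errata list."""
    for model_dir in sorted(Path(dataset_root).glob("id-*")):
        if model_dir.name in CORRUPTED_IDS:
            continue  # contaminated during packaging; see the errata above
        yield model_dir

# Example: count the usable models in an extracted copy of the dataset.
# print(sum(1 for _ in usable_model_dirs("./nlp-sentiment-classification-mar2021-test")))

With the 47 contaminated models excluded, 457 of the 504 models remain.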

About this Dataset

Updated: 2024-02-22
Metadata Last Updated: 2021-03-26 00:00:00
Date Created: N/A
Data Provided by: National Institute of Standards and Technology
Keywords: Trojan Detection; Artificial Intelligence; AI; Machine Learning; Adversarial Machine Learning;
Dataset Owner: N/A

Access this data

Contact dataset owner: [email protected]
Access URL: https://drive.google.com/drive/folders/1kRVDu5WUgWVonrHzekcEcrF6pfRzfEDB?usp=drive_link
Landing Page URL: https://data.nist.gov/od/id/mds2-2384
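One way to fetch the Google Drive folder above programmatically is sketched below. It relies on the third-party gdown package (pip install gdown), which is an assumption and not an official NIST download path; large folders may need to be fetched in parts or through the Drive web interface.

# Minimal sketch: download the dataset folder listed under "Access URL" with gdown.
# This is one option for illustration, not an official NIST download mechanism.
import gdown

# Folder URL without the ?usp=... query string; gdown extracts the folder ID from it.
ACCESS_URL = "https://drive.google.com/drive/folders/1kRVDu5WUgWVonrHzekcEcrF6pfRzfEDB"
gdown.download_folder(ACCESS_URL, output="nlp-sentiment-classification-mar2021-test", quiet=False)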
{
    "identifier": "ark:\/88434\/mds2-2384",
    "accessLevel": "public",
    "contactPoint": {
        "hasEmail": "mailto:[email protected]",
        "fn": "Michael Paul Majurski"
    },
    "programCode": [
        "006:045"
    ],
    "@type": "dcat:Dataset",
    "landingPage": "https:\/\/data.nist.gov\/od\/id\/mds2-2384",
    "description": "Round 5 Test DatasetThis is the test data used to construct and evaluate trojan detection software solutions. This data, generated at NIST, consists of natural language processing (NLP) AIs trained to perform text sentiment classification on English text. A known percentage of these trained AI models have been poisoned with a known trigger which induces incorrect behavior. This data will be used to develop software solutions for detecting which trained AI models have been poisoned via embedded triggers. This dataset consists of 504 adversarially trained, sentiment classification AI models using a small set of model architectures. The models were trained on text data drawn from movie and product reviews. Half (50%) of the models have been poisoned with an embedded trigger which causes misclassification of the images when the trigger is present. Errata: The following models were contaminated during dataset packaging. This caused nominally clean models to have a trigger. Please avoid using these models. Due to the similarity between the Round5 and Round6 datasets (both contain similarly trained sentiment classification AI models), the dataset authors suggest ignoring the Round5 data and only using the Round6 dataset. Corrupted Models: [id-00000000, id-00000003, id-00000004, id-00000005, id-00000011, id-00000022, id-00000074, id-00000076, id-00000084, id-00000091, id-00000094, id-00000147, id-00000149, id-00000156, id-00000159, id-00000162, id-00000166, id-00000168, id-00000171, id-00000176, id-00000178, id-00000216, id-00000217, id-00000220, id-00000222, id-00000223, id-00000227, id-00000233, id-00000238, id-00000239, id-00000246, id-00000290, id-00000293, id-00000301, id-00000314, id-00000323, id-00000367, id-00000368, id-00000369, id-00000372, id-00000379, id-00000388, id-00000433, id-00000438, id-00000441, id-00000447, id-00000451]",
    "language": [
        "en"
    ],
    "title": "Trojan Detection Software Challenge - nlp-sentiment-classification-mar2021-test",
    "distribution": [
        {
            "accessURL": "https:\/\/drive.google.com\/drive\/folders\/1kRVDu5WUgWVonrHzekcEcrF6pfRzfEDB?usp=drive_link",
            "title": "nlp-sentiment-classification-mar2021-test"
        }
    ],
    "license": "https:\/\/www.nist.gov\/open\/license",
    "bureauCode": [
        "006:55"
    ],
    "modified": "2021-03-26 00:00:00",
    "publisher": {
        "@type": "org:Organization",
        "name": "National Institute of Standards and Technology"
    },
    "theme": [
        "Information Technology:Software research",
        "Information Technology:Cybersecurity",
        "Information Technology:Computational science"
    ],
    "issued": "2021-03-29",
    "keyword": [
        "Trojan Detection; Artificial Intelligence; AI; Machine Learning; Adversarial Machine Learning;"
    ]
}
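The record above is the DCAT-style metadata used by the data catalog. A minimal sketch of reading it follows, assuming the JSON has been saved locally as metadata.json (a hypothetical filename for illustration).

# Minimal sketch: pull the access links and contact out of the DCAT record above.
# Assumes the JSON has been saved locally as metadata.json (hypothetical filename).
import json

with open("metadata.json", "r", encoding="utf-8") as f:
    record = json.load(f)

landing_page = record["landingPage"]
access_urls = [d["accessURL"] for d in record.get("distribution", []) if "accessURL" in d]
contact_email = record["contactPoint"]["hasEmail"].removeprefix("mailto:")

print("Landing page:", landing_page)
print("Access URLs:", ", ".join(access_urls))
print("Contact:", contact_email)

The escaped forward slashes (\/) in the record are valid JSON and are decoded transparently by json.load.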
