
Trojan Detection Software Challenge - image-classification-dec2020-train

Round 3 Training Dataset

The data being generated and disseminated is the training data used to construct trojan detection software solutions. This data, generated at NIST, consists of AI models trained to perform image classification at human-level accuracy. A known percentage of these trained models have been poisoned with a known trigger that induces incorrect behavior. The data will be used to develop software for detecting which trained models have been poisoned via embedded triggers. The dataset consists of 1008 adversarially trained image-classification models spanning a variety of architectures. The models were trained on synthetically created images of non-real traffic signs superimposed on road background scenes. Half (50%) of the models have been poisoned with an embedded trigger that causes misclassification when the trigger is present.

About this Dataset

Updated: 2024-02-22
Metadata Last Updated: 2020-10-23
Date Created: N/A
Keywords: Trojan Detection; Artificial Intelligence; AI; Machine Learning; Adversarial Machine Learning
Dataset Owner: N/A

Access this data

Access URL: https://drive.google.com/drive/folders/1jKq-BWGZwSa_Zp73aiDqsJxqFaJa6jwJ?usp=drive_link
Landing Page: https://data.nist.gov/od/id/mds2-2320
Contact dataset owner: [email protected]
Table representation of structured data

Title: Trojan Detection Software Challenge - image-classification-dec2020-train
Description: (identical to the Round 3 Training Dataset summary above)
Modified: 2020-10-23
Publisher Name: National Institute of Standards and Technology
Contact: mailto:[email protected]
Keywords: Trojan Detection; Artificial Intelligence; AI; Machine Learning; Adversarial Machine Learning
{
    "identifier": "ark:\/88434\/mds2-2320",
    "accessLevel": "public",
    "contactPoint": {
        "hasEmail": "mailto:[email protected]",
        "fn": "Michael Paul Majurski"
    },
    "programCode": [
        "006:045"
    ],
    "@type": "dcat:Dataset",
    "landingPage": "https:\/\/data.nist.gov\/od\/id\/mds2-2320",
    "description": "Round 3 Training DatasetThe data being generated and disseminated is the training data used to construct trojan detection software solutions. This data, generated at NIST, consists of human level AIs trained to perform image classification. A known percentage of these trained AI models have been poisoned with a known trigger which induces incorrect behavior. This data will be used to develop software solutions for detecting which trained AI models have been poisoned via embedded triggers. This dataset consists of 1008 adversarially trained, human level, image classification AI models using a variety of model architectures. The models were trained on synthetically created image data of non-real traffic signs superimposed on road background scenes. Half (50%) of the models have been poisoned with an embedded trigger which causes misclassification of the images when the trigger is present.",
    "language": [
        "en"
    ],
    "title": "Trojan Detection Software Challenge - image-classification-dec2020-train",
    "distribution": [
        {
            "accessURL": "https:\/\/drive.google.com\/drive\/folders\/1jKq-BWGZwSa_Zp73aiDqsJxqFaJa6jwJ?usp=drive_link",
            "title": "image-classification-dec2020-train"
        }
    ],
    "license": "https:\/\/www.nist.gov\/open\/license",
    "bureauCode": [
        "006:55"
    ],
    "modified": "2020-10-23 00:00:00",
    "publisher": {
        "@type": "org:Organization",
        "name": "National Institute of Standards and Technology"
    },
    "theme": [
        "Information Technology:Software research",
        "Information Technology:Cybersecurity",
        "Information Technology:Computational science"
    ],
    "issued": "2020-10-30",
    "keyword": [
        "Trojan Detection; Artificial Intelligence; AI; Machine Learning; Adversarial Machine Learning;"
    ]
}
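The JSON block above is a standard DCAT-style dataset record, so it can be consumed programmatically. A minimal sketch of reading the fields most users need (title, identifier, and download link), using only the Python standard library; the record below is abridged to just the fields being read:

```python
import json

# Abridged copy of the DCAT record published on this page.
metadata_json = '''
{
    "identifier": "ark:/88434/mds2-2320",
    "title": "Trojan Detection Software Challenge - image-classification-dec2020-train",
    "landingPage": "https://data.nist.gov/od/id/mds2-2320",
    "distribution": [
        {
            "accessURL": "https://drive.google.com/drive/folders/1jKq-BWGZwSa_Zp73aiDqsJxqFaJa6jwJ?usp=drive_link",
            "title": "image-classification-dec2020-train"
        }
    ]
}
'''

record = json.loads(metadata_json)

# Print the dataset title, then each distribution with its download URL.
print(record["title"])
for dist in record.get("distribution", []):
    print(f'  {dist["title"]} -> {dist["accessURL"]}')
```

The same parsing works against the full record (e.g. fetched from the landing page's machine-readable endpoint), since `json.loads` ignores fields the code does not touch.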
