About this Dataset
| Field | Value |
| --- | --- |
| Title | Trojan Detection Software Challenge - cyber-git-dec2024-train |
| Description | This is the training data used to create and evaluate trojan detection software solutions. This data, generated at NIST, consists of models trained to predict whether code from public git repositories would survive in its branch for one month or more as a quantifiable proxy for code quality. A known percentage of these trained AI models have been poisoned with a known trigger which induces incorrect behavior. This data will be used to develop software solutions for detecting that trigger behavior in the trained AI models. |
| Modified | 2024-10-30 00:00:00 |
| Publisher Name | National Institute of Standards and Technology |
| Contact | Michael Paul Majurski (mailto:[email protected]) |
| Keywords | Trojan Detection; Artificial Intelligence; AI; Machine Learning; Adversarial Machine Learning |
Machine-readable metadata record (JSON):

```json
{
  "identifier": "ark:/88434/mds2-3659",
  "accessLevel": "public",
  "contactPoint": {
    "hasEmail": "mailto:[email protected]",
    "fn": "Michael Paul Majurski"
  },
  "programCode": ["006:045"],
  "landingPage": "https://data.nist.gov/od/id/mds2-3659",
  "title": "Trojan Detection Software Challenge - cyber-git-dec2024-train",
  "description": "This is the training data used to create and evaluate trojan detection software solutions. This data, generated at NIST, consists of models trained to predict whether code from public git repositories would survive in its branch for one month or more as a quantifiable proxy for code quality. A known percentage of these trained AI models have been poisoned with a known trigger which induces incorrect behavior. This data will be used to develop software solutions for detecting that trigger behavior in the trained AI models.",
  "language": ["en"],
  "distribution": [
    {
      "accessURL": "https://drive.google.com/file/d/1_uLyou-nD0DeisrD2yKFOGr5-SaArlnI/view?usp=drive_link",
      "title": "cyber-git-dec2024-train"
    }
  ],
  "bureauCode": ["006:55"],
  "modified": "2024-10-30 00:00:00",
  "publisher": {
    "@type": "org:Organization",
    "name": "National Institute of Standards and Technology"
  },
  "theme": [
    "Information Technology:Software research",
    "Information Technology:Cybersecurity"
  ],
  "keyword": [
    "Trojan Detection; Artificial Intelligence; AI; Machine Learning; Adversarial Machine Learning;"
  ]
}
```
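As a minimal sketch of how a download script might consume this metadata record: the snippet below parses an excerpt of the JSON above with Python's standard `json` module and pulls out the fields needed to locate the data. The field names (`identifier`, `landingPage`, `distribution`, `accessURL`) are taken directly from the record; nothing else about the dataset's layout is assumed.

```python
import json

# Excerpt of the dataset's published JSON metadata record (fields copied
# from the record above; other fields omitted for brevity).
record_text = """
{
  "identifier": "ark:/88434/mds2-3659",
  "title": "Trojan Detection Software Challenge - cyber-git-dec2024-train",
  "landingPage": "https://data.nist.gov/od/id/mds2-3659",
  "distribution": [
    {
      "accessURL": "https://drive.google.com/file/d/1_uLyou-nD0DeisrD2yKFOGr5-SaArlnI/view?usp=drive_link",
      "title": "cyber-git-dec2024-train"
    }
  ]
}
"""

record = json.loads(record_text)

# Each entry under "distribution" carries an actual data link.
for dist in record["distribution"]:
    print(dist["title"], "->", dist["accessURL"])
```

Running this prints the distribution title and its Google Drive access URL; a real script would follow `accessURL` to fetch the training archive.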