TrojAI llm-pretrain-apr2024 Train Dataset

This is the training data used to create and evaluate trojan detection software solutions. The data, generated at NIST, consists of Llama2 large language models refined using fine-tuning and LoRA to perform next-token prediction. A known percentage of these trained AI models have been poisoned with triggers that induce modified behavior. This data will be used to develop software solutions for detecting which trained AI models have been poisoned via triggers embedded in the model weights.
About this Dataset

| Field | Value |
|---|---|
| Title | Trojan Detection Software Challenge - llm-pretrain-apr2024-train |
| Description | This is the training data used to create and evaluate trojan detection software solutions. The data, generated at NIST, consists of Llama2 large language models refined using fine-tuning and LoRA to perform next-token prediction. A known percentage of these trained AI models have been poisoned with triggers that induce modified behavior. This data will be used to develop software solutions for detecting which trained AI models have been poisoned via triggers embedded in the model weights. |
| Modified | 2024-04-16 00:00:00 |
| Publisher Name | National Institute of Standards and Technology |
| Contact | mailto:[email protected] |
| Keywords | Trojan Detection; Artificial Intelligence; AI; Machine Learning; Adversarial Machine Learning |
The machine-readable metadata record for this dataset:

```json
{
  "identifier": "ark:/88434/mds2-3235",
  "accessLevel": "public",
  "contactPoint": {
    "hasEmail": "mailto:[email protected]",
    "fn": "Michael Paul Majurski"
  },
  "programCode": ["006:045"],
  "landingPage": "",
  "title": "Trojan Detection Software Challenge - llm-pretrain-apr2024-train",
  "description": "This is the training data used to create and evaluate trojan detection software solutions. The data, generated at NIST, consists of Llama2 large language models refined using fine-tuning and LoRA to perform next-token prediction. A known percentage of these trained AI models have been poisoned with triggers that induce modified behavior. This data will be used to develop software solutions for detecting which trained AI models have been poisoned via triggers embedded in the model weights.",
  "language": ["en"],
  "distribution": [
    {
      "accessURL": "https://drive.google.com/drive/folders/1eI7MsVi1qqSHvnfCUWkgNnphTk0Cth5M?usp=sharing",
      "title": "llm-pretrain-apr2024-train"
    }
  ],
  "bureauCode": ["006:55"],
  "modified": "2024-04-16 00:00:00",
  "publisher": {
    "@type": "org:Organization",
    "name": "National Institute of Standards and Technology"
  },
  "theme": [
    "Information Technology:Software research",
    "Information Technology:Cybersecurity"
  ],
  "keyword": [
    "Trojan Detection; Artificial Intelligence; AI; Machine Learning; Adversarial Machine Learning;"
  ]
}
```
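As a quick illustration, the metadata record above can be parsed with Python's standard library to pull out the fields most useful for locating the data: the download URL lives in the `distribution` list, and the keywords are stored as a single semicolon-delimited string. This is a minimal sketch against an abridged copy of the record; the field names follow the record as published, but the splitting of keywords is an assumption about how the semicolon-delimited string is intended to be read.

```python
import json

# Abridged copy of the dataset's metadata record (fields taken verbatim from above).
record_json = '''
{
  "identifier": "ark:/88434/mds2-3235",
  "title": "Trojan Detection Software Challenge - llm-pretrain-apr2024-train",
  "modified": "2024-04-16 00:00:00",
  "distribution": [
    {
      "accessURL": "https://drive.google.com/drive/folders/1eI7MsVi1qqSHvnfCUWkgNnphTk0Cth5M?usp=sharing",
      "title": "llm-pretrain-apr2024-train"
    }
  ],
  "keyword": [
    "Trojan Detection; Artificial Intelligence; AI; Machine Learning; Adversarial Machine Learning;"
  ]
}
'''

record = json.loads(record_json)

# Each entry in "distribution" describes one download location for the dataset.
urls = [d["accessURL"] for d in record["distribution"]]

# "keyword" holds one semicolon-delimited string; split it into a clean list.
keywords = [k.strip() for k in record["keyword"][0].split(";") if k.strip()]

print(record["identifier"])  # ark:/88434/mds2-3235
print(urls[0])
print(keywords)
```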