Threat | Description | Potential Impact | Mitigation | Related Security Control | Traditional Attack Analog | Severity |
--- | --- | --- | --- | --- | --- | --- |
Adversarial Perturbation | Modifying input data to intentionally cause a machine learning model to make an incorrect prediction. | Financial fraud, physical harm | Reinforcing adversarial robustness: Implement techniques such as adversarial training and regularization to make machine learning models more resistant to adversarial attacks (see the adversarial-training sketch after this table). | AC-6 (Access Control): Implement strong access controls to limit access to machine learning models and data. | Remote Elevation of Privilege | Critical to Important |
Data Poisoning | Introducing malicious data into the training data of a machine learning model to manipulate its predictions. | Erroneous decision-making, unfair outcomes | Anomaly sensors: Implement anomaly detection systems to identify and flag suspicious data points before training (see the anomaly-detection sketch after this table). | PR-3 (Protect Information): Implement data sanitization and validation procedures to ensure the integrity of data used in machine learning models. | Trojaned host, Authenticated Denial of Service | Critical to Important |
Model Inversion Attacks | Reconstructing private features of the data used to train a machine learning model by querying it. | Privacy violations, identity theft | Strong access control: Implement strong access controls to limit access to sensitive data used in machine learning models. | AC-5 (Identity and Access Management): Implement strong authentication and authorization mechanisms to control access to machine learning models and data. | Targeted covert Information Disclosure | Important to Critical |
Membership Inference Attack | Determining whether a particular data point was used to train a machine learning model. | Privacy violations, targeted attacks | Differential Privacy: Implement differential privacy techniques to protect individual training records while still allowing the model to learn (see the DP-SGD sketch after this table). | SC-8 (Supply Chain Risk Management): Implement secure procurement and supply chain management practices to ensure the integrity of third-party components used in machine learning models. | Data Privacy | Privacy issue, not a security issue |
Model Stealing | Replicating or stealing a machine learning model to use it for malicious purposes. | Intellectual property theft, unauthorized access to sensitive data | Minimize details in prediction APIs: Limit the information exposed through prediction APIs to reduce the risk of model extraction (see the prediction-API sketch after this table). | RA-3 (Risk Assessment): Conduct regular risk assessments to identify and address potential vulnerabilities in machine learning models and systems. | Unauthenticated read-only tampering of system data | Important to Moderate |
Neural Net Reprogramming | Repurposing a deployed machine learning model through specially crafted inputs so that it performs a task it was never intended to do. | Erroneous decision-making, unauthorized access to sensitive data | Strong mutual authentication and access control: Implement strong mutual authentication and access control on model interfaces to prevent unauthorized use and repurposing. | IR-1 (Incident Identification): Implement incident detection and response capabilities to identify and respond to attacks on machine learning models. | Abuse scenario | Important to Critical |
Adversarial Example in the Physical Domain | Crafting adversarial inputs that remain effective in the physical world (for example, stickers on a road sign) to mislead deployed perception systems. | Physical harm, property damage | Traditional security practices in the data/algorithm layer: Implement traditional security practices, such as input validation and data sanitization, to protect against adversarial examples. | PR-1 (Protect Physical Assets): Implement physical security measures to protect machine learning systems and data from unauthorized access. | Elevation of Privilege, remote code execution | Critical |
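
The adversarial-training mitigation in the Adversarial Perturbation row can be made concrete with a short sketch. The following is a minimal, illustrative PyTorch training step rather than a hardened defense; the model, optimizer, and `epsilon` value are placeholder assumptions, and FGSM is used only because it is the simplest perturbation method (stronger defenses typically use multi-step attacks such as PGD).

```python
# Minimal sketch: one adversarial-training step using FGSM perturbations.
# `model`, `optimizer`, and `epsilon` are illustrative placeholders.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon, loss_fn):
    """Craft x_adv = x + epsilon * sign(grad_x loss), the FGSM perturbation."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.1):
    loss_fn = nn.CrossEntropyLoss()
    x_adv = fgsm_perturb(model, x, y, epsilon, loss_fn)
    optimizer.zero_grad()  # discard gradients left over from crafting x_adv
    # Train on clean and adversarial examples together to keep clean accuracy.
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```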
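
For the anomaly-sensor mitigation in the Data Poisoning row, a lightweight starting point is to score training records with an off-the-shelf outlier detector and route flagged points to review before training. The feature matrix and `contamination` rate below are illustrative placeholders, and IsolationForest is only one reasonable detector choice.

```python
# Minimal sketch: flag suspicious training points with scikit-learn's IsolationForest.
# X_train and the contamination rate are illustrative placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 20))      # stand-in for real training features

detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(X_train)     # -1 = flagged as anomalous, 1 = normal

suspicious_idx = np.where(labels == -1)[0]
print(f"Flagged {len(suspicious_idx)} of {len(X_train)} training points for review")
X_clean = X_train[labels == 1]             # train only on the points that pass review
```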
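
The differential-privacy mitigation in the Membership Inference row is usually applied during training, for example with DP-SGD: clip each per-example gradient and add calibrated Gaussian noise. The sketch below shows only the core update under simplifying assumptions; the model, clipping norm, and noise multiplier are placeholders, and a real deployment would use a vetted library and track the cumulative privacy budget.

```python
# Minimal sketch: a DP-SGD-style update that clips each per-example gradient and
# adds Gaussian noise before averaging. `clip_norm` and `noise_multiplier` are
# illustrative; accounting of the resulting (epsilon, delta) budget is omitted.
import torch
import torch.nn as nn

def dp_sgd_step(model, optimizer, batch_x, batch_y, clip_norm=1.0, noise_multiplier=1.0):
    loss_fn = nn.CrossEntropyLoss()
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    # Clip each example's gradient so no single record dominates the update.
    for x, y in zip(batch_x, batch_y):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (total_norm + 1e-6), max=1.0)
        for s, g in zip(summed, grads):
            s += g * scale

    # Add noise calibrated to the clipping bound, then average and apply the step.
    optimizer.zero_grad()
    for p, s in zip(params, summed):
        noise = torch.normal(0.0, noise_multiplier * clip_norm,
                             size=p.shape, device=p.device)
        p.grad = (s + noise) / len(batch_x)
    optimizer.step()
```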
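
The Model Stealing row's mitigation, minimizing details in prediction APIs, often comes down to returning the least informative answer that still serves the client. A common pattern is to expose only the top label with a coarsely rounded confidence instead of the full probability vector; the label set and rounding step below are illustrative assumptions.

```python
# Minimal sketch: trim a prediction response to reduce model-extraction signal.
# CLASS_NAMES and the rounding step are illustrative placeholders.
import numpy as np

CLASS_NAMES = ["approve", "review", "reject"]

def minimal_response(probabilities, round_to=0.1):
    probs = np.asarray(probabilities, dtype=float)
    top = int(np.argmax(probs))
    # Coarse rounding leaks far less information than raw per-class scores.
    confidence = round(float(probs[top]) / round_to) * round_to
    return {"label": CLASS_NAMES[top], "confidence": round(confidence, 2)}

print(minimal_response([0.07, 0.81, 0.12]))  # {'label': 'review', 'confidence': 0.8}
```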