feat(docs): add cheatsheet for ML07 #207
Open
aryanxk02 wants to merge 2 commits into OWASP:master from aryanxk02:cheatsheet
@@ -0,0 +1,47 @@
## ML01:2023 Input Manipulation Attack

Input Manipulation Attacks involve changing input data to trick models, with Adversarial Attacks being a key tactic. Prevention methods include training models with deceptive examples (Adversarial Training), using robust models resistant to manipulation, and employing input validation to detect and reject potentially harmful inputs.
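A minimal sketch of the adversarial-training idea, assuming a PyTorch classifier; `model`, `images`, and `labels` are placeholders. Each batch is perturbed with FGSM and the perturbed inputs are trained on alongside the clean ones:

```python
import torch
import torch.nn.functional as F

def fgsm_examples(model, images, labels, eps=0.03):
    """Return inputs perturbed in the direction that increases the loss (FGSM)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step each pixel by eps in the sign of its gradient.
    adv = images + eps * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

# During training, mix adversarial examples into each batch:
# adv_batch = fgsm_examples(model, batch_x, batch_y)
# loss = F.cross_entropy(model(adv_batch), batch_y)
```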
## ML02:2023 Data Poisoning Attack

Data poisoning attacks involve manipulating training data to influence model behavior negatively. Prevention methods include thorough validation and verification of training data, secure storage practices, data separation, access controls, monitoring, auditing, model validation with separate sets, model ensembles, and anomaly detection to identify abnormal behavior.
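A minimal sketch of pre-training anomaly detection using scikit-learn's Isolation Forest; the feature matrix is a placeholder and the contamination rate is an assumption to tune per dataset:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

X_train = np.random.rand(1000, 20)       # placeholder feature matrix

detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(X_train)   # -1 = anomaly, 1 = inlier

clean_X = X_train[labels == 1]           # proceed to training
suspect_X = X_train[labels == -1]        # route these rows to manual review
```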
## ML03:2023 Model Inversion Attack

Model inversion attacks involve extracting information from models by reverse-engineering them. Prevention includes restricting access, validating inputs, ensuring model transparency, regular monitoring, and retraining models. Vigilant implementation of these measures is crucial to safeguard against such attacks.
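One illustrative way to limit what each query reveals is a hypothetical wrapper that returns only the top label and a rounded confidence instead of the full probability vector; `predict_proba` is an assumed model hook, not part of any specific API:

```python
import numpy as np

def hardened_predict(predict_proba, x, decimals=1):
    probs = predict_proba(x)             # full probability vector (internal only)
    top = int(np.argmax(probs))
    # Expose only the winning class and a coarse confidence score.
    return {"label": top, "confidence": round(float(probs[top]), decimals)}
```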
## ML04:2023 Membership Inference Attack

Membership inference attacks aim to determine whether a specific record was part of a model's training data, which can expose sensitive information about individuals. Prevention methods include training models on randomized data, obfuscating predictions with noise or differential privacy, regularization techniques, reducing training data size, and testing and monitoring for anomalies to thwart such attacks.
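A rough sketch of prediction obfuscation with Laplace noise. This is illustrative only, not a full differential-privacy accounting; `predict_proba` and `epsilon` are assumptions:

```python
import numpy as np

def noisy_confidences(predict_proba, x, epsilon=1.0, rng=None):
    rng = rng or np.random.default_rng()
    probs = predict_proba(x)
    # Perturb each confidence so per-record scores are less useful to an attacker.
    noisy = probs + rng.laplace(scale=1.0 / epsilon, size=probs.shape)
    noisy = np.clip(noisy, 0.0, None)
    return noisy / noisy.sum()            # renormalise to a distribution
```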
## ML05:2023 Model Theft

Model theft attacks involve unauthorized access to a model's parameters. Prevention methods include encryption of sensitive information, strict access controls, regular backups, code obfuscation, watermarking, legal protection, and monitoring/auditing to detect and prevent theft attempts.
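A minimal sketch of query monitoring: a hypothetical per-key sliding-window rate limiter that flags the high query volumes model-extraction attempts typically need. The window size and threshold are assumptions:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_QUERIES = 100
_history = defaultdict(deque)

def allow_query(api_key: str) -> bool:
    now = time.time()
    q = _history[api_key]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()                  # drop entries outside the window
    if len(q) >= MAX_QUERIES:
        return False                 # deny and alert: possible extraction attempt
    q.append(now)
    return True
```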
## ML06:2023 AI Supply Chain Attacks

AI Supply Chain Attacks involve tampering with machine learning libraries or models used by a system, including the associated data. Prevention involves verifying package signatures, using secure repositories like Anaconda, keeping packages updated, employing virtual environments, conducting code reviews, utilizing package verification measures such as PEP 476 and secure package installs, and educating developers on the risks.
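A small sketch of artifact verification: checking a downloaded package or model file against a pinned SHA-256 digest before installing or loading it. The file name and digest are placeholders; for Python dependencies, `pip install --require-hashes` enforces the same idea declaratively:

```python
import hashlib

PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def verify_artifact(path: str, expected: str = PINNED_SHA256) -> bool:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected

# if not verify_artifact("some_package-1.0-py3-none-any.whl"):
#     raise RuntimeError("artifact hash mismatch; refusing to install")
```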
## ML07:2023 Transfer Learning Attack

Transfer learning attacks involve training a model on one task and fine-tuning it on another to cause undesirable behavior. Prevention methods include monitoring and updating training datasets regularly, using secure and trusted datasets, implementing model isolation, employing differential privacy, and conducting regular security audits to identify and address vulnerabilities.
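One hedged example of the monitoring/audit idea: re-evaluating the model on a trusted, held-out benchmark before and after fine-tuning, and failing the pipeline if performance degrades suspiciously. `evaluate`, the benchmark, and the threshold are assumptions:

```python
def check_transfer(model_before, model_after, benchmark, evaluate, max_drop=0.05):
    """Fail loudly if fine-tuning degraded behaviour on the trusted benchmark."""
    acc_before = evaluate(model_before, benchmark)
    acc_after = evaluate(model_after, benchmark)
    if acc_before - acc_after > max_drop:
        raise RuntimeError(
            f"accuracy dropped {acc_before - acc_after:.3f} after fine-tuning; "
            "inspect the pretrained weights and fine-tuning data"
        )
    return acc_after
```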
## ML08:2023 Model Skewing

Model skewing attacks involve manipulating the distribution of training data to induce undesirable model behavior. Prevention strategies include implementing robust access controls, verifying the authenticity of feedback data, employing data validation and cleaning techniques, implementing anomaly detection, regularly monitoring model performance, and continuously training the model with updated and verified data.
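A minimal sketch of feedback validation before retraining: hypothetical schema and range checks on incoming feedback records. The field names, labels, and ranges are assumptions for a fraud-detection example:

```python
ALLOWED_LABELS = {"fraud", "not_fraud"}

def validate_feedback(record: dict) -> bool:
    """Accept only well-formed feedback from an identified source."""
    if record.get("label") not in ALLOWED_LABELS:
        return False
    amount = record.get("amount")
    if not isinstance(amount, (int, float)) or not (0 <= amount <= 1_000_000):
        return False
    # Only accept feedback tied to an authenticated, known source.
    return bool(record.get("source_id"))

# retraining_set = [r for r in incoming_feedback if validate_feedback(r)]
```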
## ML09:2023 Output Integrity Attack

In an Output Integrity Attack, an attacker aims to manipulate a machine learning model's output to cause harm. Prevention methods include using cryptographic techniques for result authenticity verification, securing communication channels, input validation, maintaining tamper-evident logs, regular software updates, and monitoring and auditing for suspicious activities.
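A small sketch of result signing with an HMAC so downstream consumers can detect tampered outputs. Key management and transport security are out of scope here, and the key shown is a placeholder:

```python
import hashlib
import hmac
import json

SECRET_KEY = b"replace-with-a-managed-secret"   # placeholder key

def sign_result(result: dict) -> dict:
    payload = json.dumps(result, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"result": result, "signature": tag}

def verify_result(message: dict) -> bool:
    payload = json.dumps(message["result"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["signature"])
```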
## ML10:2023 Model Poisoning

Model poisoning attacks involve manipulating a model's parameters to induce undesirable behavior. Prevention methods include regularization techniques to mitigate overfitting, designing robust model architectures and activation functions, and employing cryptographic techniques to secure model parameters from unauthorized access or manipulation.
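A hedged sketch of parameter integrity checking: fingerprinting a PyTorch model's `state_dict` with SHA-256 and comparing it to a digest recorded at release time. The trusted digest store is assumed:

```python
import hashlib
import torch

def parameter_digest(model: torch.nn.Module) -> str:
    """Compute a stable SHA-256 fingerprint over all named parameters/buffers."""
    h = hashlib.sha256()
    for name, tensor in sorted(model.state_dict().items()):
        h.update(name.encode())
        h.update(tensor.detach().cpu().numpy().tobytes())
    return h.hexdigest()

# At release time: trusted_digest = parameter_digest(model)   (store securely)
# At load time:
# if parameter_digest(model) != trusted_digest:
#     raise RuntimeError("model parameters changed since release")
```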
For now, let's hold off on the summaries. I appreciate the ownership but work still needs to be done on the core docs. I think once that is complete, we will just lift the Description of each one into the respective summaries.