diff --git a/tab_glossary.md b/tab_glossary.md
index 7550ed5..ddaf5f8 100644
--- a/tab_glossary.md
+++ b/tab_glossary.md
@@ -56,7 +56,9 @@ tags: related-tag
 
 ## A {#a}
 
-[]()
+
+[Adversarial attack](#adversarial_attack)
+Type of attack that seeks to trick machine learning models into misclassifying inputs by maliciously tampering with input data
 
 ## B {#b}
 
@@ -64,15 +66,21 @@ tags: related-tag
 
 ## C {#c}
 
-[]()
+[Classification](#classification)
+Process of arranging things into groups that are distinct from each other and separated by clearly determined lines of demarcation
 
 ## D {#d}
 
-[]()
+[Data labeling](#data_labeling)
+Process of assigning tags or categories to each data point in a dataset
+
+[Data poisoning](#data_poisoning)
+Type of attack that injects poisoned samples into the training data
 
 ## E {#e}
 
-[]()
+[Ensemble](#ensemble)
+See: [Model ensemble](#model_ensemble)
 
 ## F {#f}
 
@@ -88,7 +96,11 @@ tags: related-tag
 
 ## I {#i}
 
-[]()
+[Intrusion Detection System (IDS)](#ids)
+Security service that monitors and analyzes network or system events to detect, and provide real-time or near-real-time warning of, attempts to access system resources in an unauthorized manner
+
+[Intrusion Prevention System (IPS)](#ips)
+System that can detect an intrusive activity and also attempt to stop it, ideally before it reaches its target
 
 ## J {#j}
 
@@ -104,7 +116,14 @@ tags: related-tag
 
 ## M {#m}
 
-[]()
+[MLOps](#mlops)
+The selection, application, interpretation, deployment, and maintenance of machine learning models within an AI-enabled system
+
+[Model](#model)
+Detailed description or scaled representation of one component of a larger system that can be created, operated, and analyzed to predict the actual operational characteristics of the final produced component
+
+[Model ensemble](#model_ensemble)
+Technique of combining a diverse set of learners (individual models) to improve the stability and predictive power of the overall model
 
 ## N {#n}
 
@@ -112,7 +131,11 @@ tags: related-tag
 
 ## O {#o}
 
-[]()
+[Obfuscation](#obfuscation)
+Defense mechanism in which details of the model or training data are kept secret by adding a large amount of valid but useless information to a data store
+
+[Overfitting](#overfitting)
+Occurs when a statistical model begins to describe the random error in the data rather than the relationships between variables, typically because the model is too complex
 
 ## P {#p}
 
@@ -124,11 +147,13 @@ tags: related-tag
 
 ## R {#r}
 
-[]()
+[Regularisation](#regularisation)
+Controlling model complexity by adding information in order to solve ill-posed problems or to prevent overfitting
 
 ## S {#s}
 
-[]()
+[Spam](#spam)
+The abuse of electronic messaging systems to indiscriminately send unsolicited bulk messages
 
 ## T {#t}
 
@@ -136,7 +161,8 @@ tags: related-tag
 
 ## U {#u}
 
-[]()
+[Underfitting](#underfitting)
+Occurs when a model is unable to capture the relationship between the input and output variables accurately, producing a high error rate on both the training set and unseen data
 
 ## V {#v}