Responsible use of Machine Learning (ML) systems in practice requires not only enforcing high prediction quality, but also accounting for other constraints, such as fairness, privacy, or execution time. A simple way to address multiple user-specified constraints on ML systems is feature selection. Yet, applying feature selection to enforce user-specified constraints is challenging: optimizing feature selection strategies with respect to multiple metrics is difficult to implement and has been underrepresented in previous experimental studies. Here, we propose Declarative Feature Selection (DFS) to simplify the design and validation of ML systems that satisfy diverse user-specified constraints. We benchmark and evaluate a representative series of feature selection algorithms. From our extensive experimental results across 16 feature selection strategies, 19 datasets, and 3 classification models, we derive concrete suggestions on when to use which strategy, and we show that a meta-learning-driven optimizer can accurately predict the right strategy for the ML task at hand. These results demonstrate that feature selection can help to build ML systems that meet combinations of user-specified constraints, independent of the ML methods used. We believe that our empirical results and the proposed declarative feature selection will enable scientists and practitioners to better automate the design and validation of robust and trustworthy ML systems.
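To give a rough, self-contained illustration of the idea (this is a generic scikit-learn sketch, not the DFS API; the synthetic dataset, the demographic-parity proxy for fairness, and all thresholds are made-up assumptions), a candidate feature selection strategy can be checked against several user-specified constraints at once, e.g. a minimum accuracy, a maximum fairness gap, and a maximum number of features:

# Illustrative sketch (not the DFS API): check whether one feature selection
# strategy satisfies a set of user-specified constraints on a single dataset.
# The thresholds and the fairness proxy below are assumptions for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=30, random_state=0)
sensitive = (X[:, 0] > 0).astype(int)  # stand-in for a protected attribute

X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, y, sensitive, test_size=0.3, random_state=0)

# One candidate strategy: keep the 10 best features by ANOVA F-score.
selector = SelectKBest(f_classif, k=10).fit(X_tr, y_tr)
clf = LogisticRegression(max_iter=1000).fit(selector.transform(X_tr), y_tr)
pred = clf.predict(selector.transform(X_te))

accuracy = accuracy_score(y_te, pred)
# Simple demographic-parity gap between the two sensitive groups.
fairness_gap = abs(pred[s_te == 0].mean() - pred[s_te == 1].mean())
n_features = selector.get_support().sum()

# User-specified constraints (illustrative values).
constraints = {"min_accuracy": 0.80, "max_fairness_gap": 0.10, "max_features": 10}
satisfied = (accuracy >= constraints["min_accuracy"]
             and fairness_gap <= constraints["max_fairness_gap"]
             and n_features <= constraints["max_features"])
print(accuracy, fairness_gap, n_features, satisfied)

DFS automates the search over such strategies and constraint combinations; the paper describes the actual optimizer.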
For further details, please refer to the paper. If this code was helpful for your research, please consider citing it:
@inproceedings{Neutatz21,
  author    = {Felix Neutatz and Felix Biessmann and Ziawasch Abedjan},
  title     = {Enforcing Constraints for Machine Learning Systems via Declarative Feature Selection: An Experimental Study},
  booktitle = {SIGMOD},
  year      = {2021}
}
To run the experiments, you first need to set the paths in a configuration file that is named after your machine. Examples can be found here: ~/new_project/fastsklearnfeature/configuration/resources
We provide a small Jupyter notebook as an example: open in nbviewer / open on GitHub
We provide the datasets in an archive.
# Create and activate a Python 3.7 environment
conda create -n myenv python=3.7
conda activate myenv

# Install scikit-feature (feature selection algorithms)
git clone https://github.com/jundongl/scikit-feature.git
cd scikit-feature
python setup.py install
cd ..

# Install our fork of the Adversarial Robustness Toolbox (branch felix_version)
git clone https://github.com/FelixNeutatz/adversarial-robustness-toolbox.git
cd adversarial-robustness-toolbox
git checkout felix_version
python -m pip install .
cd ..

# Install DFS itself
git clone https://github.com/BigDaMa/DFS.git
cd DFS/new_project
python -m pip install .
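After installation, a quick sanity check could look like the following (assuming the three packages keep their usual import names skfeature, art, and fastsklearnfeature; adjust if your setup differs):

python -c "import skfeature, art, fastsklearnfeature; print('installation looks fine')"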
In addition to the charts provided in the paper, we provide further evaluations:
Additional results for Reusability: We provide additional results on reusing features that were discovered for Logistic Regression (LR) with other models, such as SVM, Naive Bayes, and decision trees.
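As an illustration of this reuse idea, here is a minimal, generic scikit-learn sketch (not our benchmark code; the dataset and the L1-regularized LR selector are assumptions for illustration): features are selected once with Logistic Regression, and the same subset is then used to train SVM, Naive Bayes, and a decision tree.

# Illustrative sketch: select features once for Logistic Regression,
# then reuse the same feature subset for other model classes.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Feature subset discovered with an L1-regularized Logistic Regression.
selector = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=1.0)).fit(X, y)
X_selected = selector.transform(X)

# Reuse the same subset for SVM, Naive Bayes, and a decision tree.
for model in (SVC(), GaussianNB(), DecisionTreeClassifier(random_state=0)):
    score = cross_val_score(model, X_selected, y, cv=5).mean()
    print(type(model).__name__, round(score, 3))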
Pareto-Optimal Results for the Test Set: For all 19 datasets, we provide all Pareto-optimal solutions that declarative feature selection found in our benchmark. Here is an example of such a Pareto front:
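For readers who want to reproduce such a front from their own results, the following minimal sketch (generic Python, not our benchmark code; the candidate names and scores are made up) identifies the Pareto-optimal points among candidates scored on two objectives that should both be maximized, e.g. accuracy and a fairness score:

# Illustrative sketch: keep only candidates that are not dominated by any
# other candidate, where both objectives (e.g. accuracy, fairness) are
# to be maximized. The values below are made up.
candidates = {
    "top-5 features":  (0.81, 0.95),
    "top-10 features": (0.86, 0.90),
    "top-20 features": (0.88, 0.80),
    "all features":    (0.89, 0.70),
    "random subset":   (0.80, 0.85),
}

def is_dominated(point, others):
    # A point is dominated if some other point is at least as good in both
    # objectives and different from it (i.e. strictly better somewhere or equal).
    return any(o[0] >= point[0] and o[1] >= point[1] and o != point
               for o in others)

pareto_front = {name: p for name, p in candidates.items()
                if not is_dominated(p, candidates.values())}
print(pareto_front)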