
Training robust and generalizable quantum models

This repository contains the source code, data and plots for the paper "Training robust and generalizable quantum models" by Julian Berberich, Daniel Fink, Daniel Pranjić, Christian Tutschku and Christian Holm [arXiv].

Installation

This repository uses Poetry for dependency management. To run the numerical simulations, Poetry must be installed first. Afterwards, run

poetry install

from the root directory of the repository to set up the Python virtual environment.

Overview

The repository contains the following directories:

  • circle1: contains all scripts for creating and visualizing the numerical simulations.
  • data: directory for storing the numerical results as raw data.
  • plots: contains all the plots as PDFs.

Running Numerical Simulations

The simulations cover two types of quantum learning models: those with trainable encodings and those with fixed (non-trainable) encodings. The two models are trained separately, and numerical simulations are then performed to evaluate their robustness and generalization performance. Finally, the results are combined into a common plot for generalization and for robustness, respectively.

Training

The training can be performed via running

poetry run training_trainable

or

poetry run training_non_trainable

respectively, from the root of the repository.

Note: The training is parallelised using Dask, but it can still take a long time to finish.

The output of the training is stored in the data directory.
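The training fans out many independent runs across Dask workers. Purely as a hedged illustration of that pattern (not the repository's Dask code), the same fan-out can be sketched with the standard library's `ProcessPoolExecutor`; `train_one` is a hypothetical placeholder:

```python
from concurrent.futures import ProcessPoolExecutor

def train_one(seed: int) -> float:
    # Hypothetical placeholder for a single training run; a real run
    # would build and optimize a quantum model and return its final loss.
    return 1.0 / (seed + 1)

if __name__ == "__main__":
    # Fan out independent trials across worker processes, as Dask does
    # across its workers, then collect the per-trial results.
    with ProcessPoolExecutor() as pool:
        losses = list(pool.map(train_one, range(4)))
    print(losses)
```

Dask adds scheduling, a dashboard, and distributed execution on top of this basic map-and-collect structure.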

Generalization Simulations

Each of the two models can be used to perform simulations to obtain the corresponding generalization performance. To do so, run

poetry run analyse_generalization_trainable

or

poetry run analyse_generalization_non_trainable

respectively.

Note: The generalization simulations are parallelised with Dask as well and can take up to several minutes.

The output of the generalization simulations is stored in the data directory.

Robustness Simulations

Each of the two models can also be used to perform simulations to obtain its robustness performance. To do so, run

poetry run analyse_robustness_trainable

or

poetry run analyse_robustness_non_trainable

respectively.

Note: The robustness simulations are parallelised with Dask as well and can take up to several minutes.

The output of the robustness simulations is stored in the data directory.
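Conceptually, a robustness simulation of this kind checks whether a trained model's predictions survive small input perturbations. The following is a hedged, self-contained sketch of that idea only; `predict` and `robust_accuracy` are hypothetical stand-ins, not the repository's implementation:

```python
import math
import random

random.seed(1)

def predict(x: float, y: float) -> int:
    # Toy stand-in for a trained classifier: label by distance from origin.
    return 1 if math.hypot(x, y) < 0.8 else 0

def robust_accuracy(points, labels, eps: float, n_samples: int = 20) -> float:
    # Fraction of points whose prediction stays correct under random
    # perturbations of magnitude at most eps (a crude robustness proxy).
    correct = 0
    for (x, y), label in zip(points, labels):
        ok = all(
            predict(x + random.uniform(-eps, eps),
                    y + random.uniform(-eps, eps)) == label
            for _ in range(n_samples)
        )
        correct += ok
    return correct / len(points)
```

Points far from the decision boundary keep their label under small perturbations, while points near the boundary do not; the paper studies how training affects this margin for quantum models.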

Visualizing the Results

The following commands can be run to visualize the results:

  • poetry run plot_circuits: visualizes the considered quantum circuits of the QML models.
  • poetry run plot_generalization_trainable: visualizes the generalization performance of the trainable encoding.
  • poetry run plot_generalization_non_trainable: visualizes the generalization performance of the fixed encoding.
  • poetry run plot_robustness: creates a common robustness plot for both QML models.
  • poetry run plot_predictions: visualizes the considered classification task, the circle classification problem.
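As background, the circle classification problem labels points in the plane by whether they lie inside a circle. A minimal, illustrative data-generation sketch (not the repository's code; the centered circle of radius √(2/π) on [-1, 1]² is an assumption of this sketch, chosen so the two classes are balanced):

```python
import math
import random

random.seed(0)

def make_circle_data(n: int):
    # Sample points uniformly from the square [-1, 1]^2 and label them
    # by whether they fall inside a centered circle. Radius sqrt(2/pi)
    # gives the circle half the square's area, so the classes balance.
    radius = math.sqrt(2.0 / math.pi)
    points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(n)]
    labels = [1 if math.hypot(x, y) < radius else 0 for x, y in points]
    return points, labels
```

poetry run plot_predictions visualizes the models' behaviour on this kind of task.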
