Update README.md (#1278)
* Update README.md
@ServiAmirPM 
- Changed required input to optional input for representative dataset
- Changed results to the new PyTorch MobileNetV2
- Added results image source
- Fixed image source
- Fixed typos, missing articles ("the", "a", "an"), and missing punctuation
- Improved text under contribution
@reuvenperetz 
* Fix broken links in readme and a typo
* Fix broken links in tutorials

---------

Co-authored-by: Reuven <44209964+reuvenperetz@users.noreply.github.com>
Co-authored-by: reuvenp <reuvenp@altair-semi.com>
3 people authored Nov 27, 2024
1 parent 471625e commit 7215538
Showing 12 changed files with 37 additions and 46 deletions.
57 changes: 24 additions & 33 deletions README.md
@@ -50,19 +50,19 @@ MCT supports various quantization methods as appears below.

Quantization Method | Complexity | Computational Cost | API | Tutorial
-------------------- | -----------|--------------------|---------|--------
-PTQ (Post Training Quantization) | Low | Low (~1-10 CPU minutes) | [PyTorch API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/pytorch_post_training_quantization.html) / [Keras API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/keras_post_training_quantization.html) | <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_post_training_quantization.ipynb"><img src="https://img.shields.io/badge/Pytorch-green"/></a> <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_post-training_quantization.ipynb"><img src="https://img.shields.io/badge/Keras-green"/></a>
-GPTQ (parameters fine-tuning using gradients) | Moderate | Moderate (~1-3 GPU hours) | [PyTorch API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/pytorch_gradient_post_training_quantization.html) / [Keras API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/keras_gradient_post_training_quantization.html) | <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_mobilenet_gptq.ipynb"><img src="https://img.shields.io/badge/PyTorch-green"/></a> <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_mobilenet_gptq.ipynb"><img src="https://img.shields.io/badge/Keras-green"/></a>
-QAT (Quantization Aware Training) | High | High (~12-36 GPU hours) | [QAT API](https://sony.github.io/model_optimization/docs/api/api_docs/index.html#qat) | <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_qat.ipynb"><img src="https://img.shields.io/badge/Keras-green"/></a>
+PTQ (Post Training Quantization) | Low | Low (~1-10 CPU minutes) | [PyTorch API](https://sony.github.io/model_optimization/api/api_docs/methods/pytorch_post_training_quantization.html) / [Keras API](https://sony.github.io/model_optimization/api/api_docs/methods/keras_post_training_quantization.html) | <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_post_training_quantization.ipynb"><img src="https://img.shields.io/badge/Pytorch-green"/></a> <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_post-training_quantization.ipynb"><img src="https://img.shields.io/badge/Keras-green"/></a>
+GPTQ (parameters fine-tuning using gradients) | Moderate | Moderate (~1-3 GPU hours) | [PyTorch API](https://sony.github.io/model_optimization/api/api_docs/methods/pytorch_gradient_post_training_quantization.html) / [Keras API](https://sony.github.io/model_optimization/api/api_docs/methods/keras_gradient_post_training_quantization.html) | <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_mobilenet_gptq.ipynb"><img src="https://img.shields.io/badge/PyTorch-green"/></a> <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_mobilenet_gptq.ipynb"><img src="https://img.shields.io/badge/Keras-green"/></a>
+QAT (Quantization Aware Training) | High | High (~12-36 GPU hours) | [QAT API](https://sony.github.io/model_optimization/api/api_docs/index.html#qat) | <a href="https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_qat.ipynb"><img src="https://img.shields.io/badge/Keras-green"/></a>

</p>
</div>

For each flow, **Quantization core** utilizes various algorithms and hyper-parameters for optimal [hardware-aware](https://github.com/sony/model_optimization/blob/main/model_compression_toolkit/target_platform_capabilities/README.md) quantization results.
For further details, please see [Supported features and algorithms](#high-level-features-and-techniques).

-Required input:
-- Floating point model - 32bit model in either .pt or .keras format
-- Representative dataset - can be either provided by the user, or generated utilizing the [Data Generation](#data-generation-) capability
+**Required input**: Floating point model - 32bit model in either .pt or .keras format
+**Optional input**: Representative dataset - can be either provided by the user, or generated utilizing the [Data Generation](#data-generation-) capability
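
In practice these two inputs map onto a single call. A minimal sketch of the PTQ entry point named in the table above (PyTorch shown; the model and the random calibration tensors are placeholders, so check the exact signature against the linked API docs):

```python
import torch
import model_compression_toolkit as mct
from torchvision.models import mobilenet_v2

# Required input: a 32-bit floating point model.
float_model = mobilenet_v2(weights="DEFAULT")

# Optional input: a representative dataset, expressed as a zero-argument
# generator that yields lists of input batches. Random tensors stand in
# for real calibration data here.
def representative_data_gen():
    for _ in range(10):
        yield [torch.randn(1, 3, 224, 224)]

quantized_model, quantization_info = mct.ptq.pytorch_post_training_quantization(
    float_model, representative_data_gen)
```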

<div align="center">
<p align="center">
@@ -95,13 +95,13 @@ Generates synthetic images based on the statistics stored in the model's batch n
The specifications of the method are detailed in the paper: _"**Data Generation for Hardware-Friendly Post-Training Quantization**"_ [5].
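
As a rough illustration, a sketch of how this capability might be invoked (the function names follow MCT's data-generation API docs, but the exact signatures and defaults here are assumptions):

```python
import model_compression_toolkit as mct
from torchvision.models import mobilenet_v2

model = mobilenet_v2(weights="DEFAULT")

# Default data-generation configuration (iterations, optimizer, losses).
config = mct.data_generation.get_pytorch_data_generation_config()

# Synthesize images matching the statistics stored in the model's
# batch-norm layers; these can then serve as a representative dataset.
generated_images = mct.data_generation.pytorch_data_generation_experimental(
    model=model,
    n_images=32,
    output_image_size=224,
    data_generation_config=config,
)
```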
__________________________________________________________________________________________________________
### Structured Pruning [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_pruning_mnist.ipynb)
-Reduces model size/complexity and ensures better channels utilization by removing redundant input channels from layers and reconstruction of layer weights. Read more ([Pytorch API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/pytorch_pruning_experimental.html) / [Keras API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/keras_pruning_experimental.html)).
+Reduces model size/complexity and ensures better channels utilization by removing redundant input channels from layers and reconstruction of layer weights. Read more ([Pytorch API](https://sony.github.io/model_optimization/api/api_docs/methods/pytorch_pruning_experimental.html) / [Keras API](https://sony.github.io/model_optimization/api/api_docs/methods/keras_pruning_experimental.html)).
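
A minimal sketch of the pruning flow (the 50% weight-memory target mirrors the pruning results below; `ResourceUtilization` and the experimental entry point are assumptions based on the linked API docs):

```python
import torch
import model_compression_toolkit as mct
from torchvision.models import resnet50

model = resnet50(weights="DEFAULT")

def representative_data_gen():
    for _ in range(4):
        yield [torch.randn(1, 3, 224, 224)]

# Target: keep ~50% of the dense weight memory (float32 = 4 bytes/param).
dense_params = sum(p.numel() for p in model.parameters())
target_ru = mct.core.ResourceUtilization(weights_memory=dense_params * 4 * 0.5)

pruned_model, pruning_info = mct.pruning.pytorch_pruning_experimental(
    model=model,
    target_resource_utilization=target_ru,
    representative_data_gen=representative_data_gen,
)
```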
__________________________________________________________________________________________________________
### **Debugging and Visualization**
**🎛️ Network Editor (Modify Quantization Configurations)** [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/keras/example_keras_network_editor.ipynb).
-Modify your model's quantization configuration for specific layers or apply a custom edit rule (e.g adjust layer's bit-width) using MCT’s network editor
+Modify your model's quantization configuration for specific layers or apply a custom edit rule (e.g adjust layer's bit-width) using MCT’s network editor.

-**🖥️ Visualization**. Observe useful information for troubleshooting the quantized model's performance using TensorBoard. [Read more](https://sony.github.io/model_optimization/docs/guidelines/visualization.html).
+**🖥️ Visualization**. Observe useful information for troubleshooting the quantized model's performance using TensorBoard. [Read more](https://sony.github.io/model_optimization/guidelines/visualization.html).

**🔑 XQuant (Explainable Quantization)** [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/sony/model_optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_xquant.ipynb). Get valuable insights regarding the quality and success of the quantization process of your model. The report includes histograms and similarity metrics between the original float model and the quantized model in key points of the model. The report can be visualized using TensorBoard.
__________________________________________________________________________________________________________
@@ -111,15 +111,15 @@ The specifications of the algorithm are detailed in the paper: _"**EPTQ: Enhance
More details on how to use EPTQ via MCT can be found in the [GPTQ guidelines](https://github.com/sony/model_optimization/blob/main/model_compression_toolkit/gptq/README.md).
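
A sketch of reaching EPTQ through the GPTQ API (the epoch count is illustrative only, and the comment about the default objective reflects the GPTQ guidelines linked above rather than anything verified here):

```python
import torch
import model_compression_toolkit as mct
from torchvision.models import mobilenet_v2

model = mobilenet_v2(weights="DEFAULT")

def representative_data_gen():
    for _ in range(10):
        yield [torch.randn(1, 3, 224, 224)]

# GPTQ configuration; per the GPTQ guidelines, MCT's gradient-based PTQ
# uses the EPTQ objective under the hood. n_epochs=5 is not a tuned value.
gptq_config = mct.gptq.get_pytorch_gptq_config(n_epochs=5)

quantized_model, quantization_info = \
    mct.gptq.pytorch_gradient_post_training_quantization(
        model, representative_data_gen, gptq_config=gptq_config)
```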

## <div align="center">Resources</div>
-* [User Guide](https://sony.github.io/model_optimization/docs/index.html) contains detailed information about MCT and guides you from installation through optimizing models for your edge AI applications.
+* [User Guide](https://sony.github.io/model_optimization/index.html) contains detailed information about MCT and guides you from installation through optimizing models for your edge AI applications.

-* MCT's [API Docs](https://sony.github.io/model_optimization/docs/api/api_docs/) is seperated per quantization methods:
+* MCT's [API Docs](https://sony.github.io/model_optimization/api/api_docs/) is separated per quantization methods:

-* [Post-training quantization](https://sony.github.io/model_optimization/docs/api/api_docs/index.html#ptq) | PTQ API docs
-* [Gradient-based post-training quantization](https://sony.github.io/model_optimization/docs/api/api_docs/index.html#gptq) | GPTQ API docs
-* [Quantization-aware training](https://sony.github.io/model_optimization/docs/api/api_docs/index.html#qat) | QAT API docs
+* [Post-training quantization](https://sony.github.io/model_optimization/api/api_docs/index.html#ptq) | PTQ API docs
+* [Gradient-based post-training quantization](https://sony.github.io/model_optimization/api/api_docs/index.html#gptq) | GPTQ API docs
+* [Quantization-aware training](https://sony.github.io/model_optimization/api/api_docs/index.html#qat) | QAT API docs

-* [Debug](https://sony.github.io/model_optimization/docs/guidelines/visualization.html) – modify optimization process or generate explainable report
+* [Debug](https://sony.github.io/model_optimization/guidelines/visualization.html) – modify optimization process or generate an explainable report

* [Release notes](https://github.com/sony/model_optimization/releases)

@@ -153,25 +153,15 @@ Currently, MCT is being tested on various Python, Pytorch and TensorFlow version
<img src="/docsrc/images/PoseEst.png" width="200">
<img src="/docsrc/images/ObjDet.png" width="200">

-### Pytorch
-We quantized classification networks from the torchvision library.
-In the following table we present the ImageNet validation results for these models:
-
-| Network Name | Float Accuracy | 8Bit Accuracy | Data-Free 8Bit Accuracy |
-|---------------------------|-----------------|-----------------|-------------------------|
-| MobileNet V2 [3] | 71.886 | 71.444 |71.29|
-| ResNet-18 [3] | 69.86 | 69.63 |69.53|
-| SqueezeNet 1.1 [3] | 58.128 | 57.678 ||
-
-### Keras
MCT can quantize an existing 32-bit floating-point model to an 8-bit fixed-point (or less) model without compromising accuracy.
-Below is a graph of [MobileNetV2](https://keras.io/api/applications/mobilenet/) accuracy on ImageNet vs average bit-width of weights (X-axis), using
-single-precision quantization, mixed-precision quantization, and mixed-precision quantization with GPTQ.
+Below is a graph of [MobileNetV2](https://pytorch.org/vision/main/models/generated/torchvision.models.mobilenet_v2.html) accuracy on ImageNet vs average bit-width of weights (X-axis), using **single-precision** quantization, **mixed-precision** quantization, and mixed-precision quantization with GPTQ.

-<img src="https://github.com/sony/model_optimization/raw/main/docsrc/images/mbv2_accuracy_graph.png">
+<p align="center">
+<img src="/docsrc/images/torch_mobilenetv2.png" width="800">

For more results, please see [1]


### Pruning Results

Results for applying pruning to reduce the parameters of the following models by 50%:
@@ -183,19 +173,20 @@ Results for applying pruning to reduce the parameters of the following models by

## <div align="center">Troubleshooting and Community</div>

-If you encountered large accuracy degradation with MCT, check out the [Quantization Troubleshooting](https://github.com/sony/model_optimization/tree/main/quantization_troubleshooting.md)
-for common pitfalls and some tools to improve quantized model's accuracy.
+If you encountered a large accuracy degradation with MCT, check out the [Quantization Troubleshooting](https://github.com/sony/model_optimization/tree/main/quantization_troubleshooting.md)
+for common pitfalls and some tools to improve the quantized model's accuracy.

Check out the [FAQ](https://github.com/sony/model_optimization/tree/main/FAQ.md) for common issues.

-You are welcome to ask questions and get support on our [issues section](https://github.com/sony/model_optimization/issues) and manage community discussions under [discussions section](https://github.com/sony/model_optimization/discussions).
+You are welcome to ask questions and get support on our [issues section](https://github.com/sony/model_optimization/issues) and manage community discussions under the [discussions section](https://github.com/sony/model_optimization/discussions).


## <div align="center">Contributions</div>
-MCT aims at keeping a more up-to-date fork and welcomes contributions from anyone.
+We'd love your input! MCT would not be possible without help from our community, and welcomes contributions from anyone!

*Checkout our [Contribution guide](https://github.com/sony/model_optimization/blob/main/CONTRIBUTING.md) for more details.

+Thank you 🙏 to all our contributors!

## <div align="center">License</div>
MCT is licensed under Apache License Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
Binary file added docsrc/images/torch_mobilenetv2.png
@@ -13,7 +13,7 @@ in some operator for its weights/activations, fusing patterns, etc.)
## Supported Target Platform Models

Currently, MCT contains three target-platform models
-(new models can be created and used by users as demonstrated [here](https://sony.github.io/model_optimization/docs/api/api_docs/modules/target_platform.html#targetplatformmodel-code-example)):
+(new models can be created and used by users as demonstrated [here](https://sony.github.io/model_optimization/api/api_docs/modules/target_platform.html#targetplatformmodel-code-example)):
- [IMX500](https://developer.sony.com/develop/imx500/)
- [TFLite](https://www.tensorflow.org/lite/performance/quantization_spec)
- [QNNPACK](https://github.com/pytorch/QNNPACK)
@@ -27,7 +27,7 @@ One may view the full default target-platform model and its parameters [here](ht

## Usage

-The simplest way to initiate a TPC and use it in MCT is by using the function [get_target_platform_capabilities](https://sony.github.io/model_optimization/docs/api/api_docs/methods/get_target_platform_capabilities.html#ug-get-target-platform-capabilities).
+The simplest way to initiate a TPC and use it in MCT is by using the function [get_target_platform_capabilities](https://sony.github.io/model_optimization/api/api_docs/methods/get_target_platform_capabilities.html#ug-get-target-platform-capabilities).

For example:

@@ -50,4 +50,4 @@ quantized_model, quantization_info = mct.ptq.keras_post_training_quantization(Mo
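
A hedged reconstruction of the truncated example above (Keras shown; MobileNetV2 and the random inputs are placeholders):

```python
import numpy as np
import model_compression_toolkit as mct
from tensorflow.keras.applications import MobileNetV2

# Retrieve the default IMX500 target-platform capabilities for Keras.
tpc = mct.get_target_platform_capabilities("tensorflow", "imx500")

def representative_data_gen():
    for _ in range(10):
        yield [np.random.randn(1, 224, 224, 3).astype(np.float32)]

quantized_model, quantization_info = mct.ptq.keras_post_training_quantization(
    MobileNetV2(),
    representative_data_gen,
    target_platform_capabilities=tpc,
)
```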

Similarly, you can retrieve IMX500, TFLite and QNNPACK target-platform models for Keras and PyTorch frameworks.

-For more information and examples, we highly recommend you to visit our [project website](https://sony.github.io/model_optimization/docs/api/api_docs/modules/target_platform.html#ug-target-platform).
+For more information and examples, we highly recommend you to visit our [project website](https://sony.github.io/model_optimization/api/api_docs/modules/target_platform.html#ug-target-platform).
@@ -276,7 +276,7 @@
"cell_type": "markdown",
"source": [
"## Target Platform Capabilities\n",
"MCT optimizes the model for dedicated hardware. This is done using TPC (for more details, please visit our [documentation](https://sony.github.io/model_optimization/docs/api/api_docs/modules/target_platform.html)). Here, we use the default Tensorflow TPC:"
"MCT optimizes the model for dedicated hardware. This is done using TPC (for more details, please visit our [documentation](https://sony.github.io/model_optimization/api/api_docs/modules/target_platform.html)). Here, we use the default Tensorflow TPC:"
],
"metadata": {
"collapsed": false
@@ -260,7 +260,7 @@
"cell_type": "markdown",
"source": [
"## Target Platform Capabilities\n",
"MCT optimizes the model for dedicated hardware. This is done using TPC (for more details, please visit our [documentation](https://sony.github.io/model_optimization/docs/api/api_docs/modules/target_platform.html)). Here, we use the default Tensorflow TPC:"
"MCT optimizes the model for dedicated hardware. This is done using TPC (for more details, please visit our [documentation](https://sony.github.io/model_optimization/api/api_docs/modules/target_platform.html)). Here, we use the default Tensorflow TPC:"
],
"metadata": {
"collapsed": false
@@ -243,7 +243,7 @@
"source": [
"## Target Platform Capabilities (TPC)\n",
"In addition, MCT optimizes models for dedicated hardware platforms using Target Platform Capabilities (TPC). \n",
"**Note:** To apply mixed-precision quantization to specific layers, the TPC must define different bit-width options for those layers. For more details, please refer to our [documentation](https://sony.github.io/model_optimization/docs/api/api_docs/modules/target_platform.html). In this example, we use the default Tensorflow TPC, which supports 2, 4, and 8-bit options for convolution and linear layers"
"**Note:** To apply mixed-precision quantization to specific layers, the TPC must define different bit-width options for those layers. For more details, please refer to our [documentation](https://sony.github.io/model_optimization/api/api_docs/modules/target_platform.html). In this example, we use the default Tensorflow TPC, which supports 2, 4, and 8-bit options for convolution and linear layers"
],
"metadata": {
"collapsed": false
@@ -237,7 +237,7 @@
"cell_type": "markdown",
"source": [
"## Target Platform Capabilities\n",
"MCT optimizes the model for dedicated hardware. This is done using TPC (for more details, please visit our [documentation](https://sony.github.io/model_optimization/docs/api/api_docs/modules/target_platform.html)). Here, we use the default Tensorflow TPC:"
"MCT optimizes the model for dedicated hardware. This is done using TPC (for more details, please visit our [documentation](https://sony.github.io/model_optimization/api/api_docs/modules/target_platform.html)). Here, we use the default Tensorflow TPC:"
],
"metadata": {
"collapsed": false
@@ -204,7 +204,7 @@
"## MCT Structured Pruning\n",
"\n",
"### Target Platform Capabilities (TPC)\n",
"MCT optimizes models for dedicated hardware using Target Platform Capabilities (TPC). For more details, please refer to our [documentation](https://sony.github.io/model_optimization/docs/api/api_docs/modules/target_platform.html)). First, we'll configure the TPC to define each layer's SIMD (Single Instruction, Multiple Data) size.\n",
"MCT optimizes models for dedicated hardware using Target Platform Capabilities (TPC). For more details, please refer to our [documentation](https://sony.github.io/model_optimization/api/api_docs/modules/target_platform.html)). First, we'll configure the TPC to define each layer's SIMD (Single Instruction, Multiple Data) size.\n",
"\n",
"In MCT, SIMD plays a key role in channel grouping, influencing the pruning process by considering channel importance within each SIMD group.\n",
"\n",
