PyTorch RangeNet++

Setup AI Model Efficiency Toolkit

Please install and set up AIMET before proceeding further. This model was tested with the torch_gpu variant of AIMET 1.24.0.
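
As a quick sanity check before proceeding, the following minimal sketch only assumes the aimet_torch Python package is importable and that PyTorch sees a GPU (which the torch_gpu variant expects):

# Minimal environment check for the torch_gpu AIMET variant.
import importlib
import torch

assert torch.cuda.is_available(), "the torch_gpu AIMET variant expects a visible CUDA device"
importlib.import_module("aimet_torch")  # raises ImportError if AIMET is not installed
print("AIMET torch import OK")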

Model Modifications & Experiment Setup

  1. Clone the RangeNet++ repo:

git clone https://github.com/PRBonn/lidar-bonnetal.git

  2. Apply patches to the two darknet.py files in the above repo using the commands below:

patch /path/to/lidar-bonnetal/train/backbones/darknet.py /path/to/aimet-model-zoo/aimet_zoo_torch/rangenet/train/models/backbones/darknet.patch

patch /path/to/lidar-bonnetal/train/tasks/semantic/decoders/darknet.py /path/to/aimet-model-zoo/aimet_zoo_torch/rangenet/train/tasks/semantic/decoders/darknet.patch

These changes are needed to meet the requirements of AIMET's prepare_model (see the sketch after this list).

  3. Create a new folder to put your downloaded dataset in

  4. Create a new folder to put your downloaded original/optimized models in

  5. Add the "models/train/tasks/semantic/evaluate.py" file to your "models/train/tasks/semantic" path

  6. Add AIMET Model Zoo to the Python path:

export PYTHONPATH=$PYTHONPATH:<aimet_model_zoo_path>
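
For context, the darknet.py patches in step 2 exist so that the network can pass through AIMET's model preparer, which rewrites the module graph into a form the quantization tooling can trace. A minimal sketch of that call (prepare_model is a real AIMET 1.x API; the tiny Sequential below is only a stand-in for the patched RangeNet++ network):

# Sketch: the patches above are what let the real model survive this call.
import torch
from aimet_torch.model_preparer import prepare_model

# Toy stand-in for the patched RangeNet++ backbone; RangeNet++ itself
# consumes 5-channel range images.
model = torch.nn.Sequential(
    torch.nn.Conv2d(5, 32, 3, padding=1),
    torch.nn.BatchNorm2d(32),
    torch.nn.LeakyReLU(0.1),
).eval()

prepared_model = prepare_model(model)  # raises if unsupported constructs remain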

Dataset

The SemanticKITTI dataset can be downloaded from the official site, http://www.semantic-kitti.org/dataset.html.

The folder structure and format of the SemanticKITTI dataset are as follows:

--dataset
	--sequences
		--00
			--velodyne
				--000000.bin
				--000001.bin
			--labels
				--000000.label
				--000001.label
			--poses.txt
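
A minimal sketch to sanity-check that layout (the dataset root below is a placeholder for wherever you placed the download):

# Layout check for one sequence of the structure shown above.
from pathlib import Path

seq = Path("dataset/sequences/00")  # placeholder root; adjust to your location
assert (seq / "velodyne").is_dir(), "missing velodyne scan folder"
assert (seq / "labels").is_dir(), "missing labels folder"
assert (seq / "poses.txt").is_file(), "missing poses.txt"
print("scans found:", len(list((seq / "velodyne").glob("*.bin"))))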

Model checkpoint and configuration

Usage

To run evaluation with AIMET QuantSim, use the following command:

python rangenet++_quanteval.py \
		--dataset-path <The path to the dataset, default is '../models/train/tasks/semantic/dataset/'> \
		--model-orig-path <The path to the original model, default is '../models/train/tasks/semantic/pre_trained_model'> \
		--model-optim-path <The path to the optimized model, default is '../models/train/tasks/semantic/quantized_model'> \
		--use-cuda <Use cuda or cpu, default is True> \
		--batch-size <Number of images per batch, default is 1>
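
Conceptually, the script wraps the model in AIMET's QuantizationSimModel, calibrates activation encodings on a few batches, and then reuses the normal mIoU evaluation on the simulated-quantized model. A minimal sketch of that flow (QuantizationSimModel and compute_encodings are real AIMET 1.x APIs; the toy model and random input are placeholders for RangeNet++ and the calibration loader):

# Sketch of the QuantSim evaluation flow inside the script above.
import torch
from aimet_torch.quantsim import QuantizationSimModel

model = torch.nn.Conv2d(5, 20, 3, padding=1).eval()  # stand-in for RangeNet++
dummy_input = torch.randn(1, 5, 64, 2048)            # range-image-shaped input

sim = QuantizationSimModel(model, dummy_input=dummy_input,
                           default_param_bw=8,   # W8
                           default_output_bw=8)  # A8

def calibrate(sim_model, _):
    # The real script loops over a handful of calibration scans here.
    with torch.no_grad():
        sim_model(dummy_input)

sim.compute_encodings(forward_pass_callback=calibrate,
                      forward_pass_callback_args=None)
# Evaluation then runs on sim.model exactly like the FP32 path, yielding mIoU.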

Quantization Configuration (W8A8)

  • Weight quantization: 8 bits, per-channel symmetric quantization
  • Bias parameters are not quantized
  • Activation quantization: 8 bits, asymmetric quantization
  • Model inputs are quantized
  • Percentile was used as the quantization scheme, with the percentile value set to 99.99
  • BatchNorm folding and AdaRound have been applied to the optimized checkpoint
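
A minimal sketch of how this configuration maps onto AIMET 1.x APIs (fold_all_batch_norms, QuantScheme.post_training_percentile, and set_percentile_value are real; the toy model and shapes are placeholders, per-channel symmetric weights come from a quantsim config file, and the separate AdaRound step used for the optimized checkpoint is omitted):

# Sketch of the W8A8 percentile recipe listed above.
import torch
from aimet_common.defs import QuantScheme
from aimet_torch.batch_norm_fold import fold_all_batch_norms
from aimet_torch.quantsim import QuantizationSimModel

model = torch.nn.Sequential(torch.nn.Conv2d(5, 32, 3, padding=1),
                            torch.nn.BatchNorm2d(32)).eval()  # toy stand-in
input_shape = (1, 5, 64, 2048)

fold_all_batch_norms(model, input_shapes=input_shape)  # BatchNorm folding, as above
sim = QuantizationSimModel(model,
                           dummy_input=torch.randn(*input_shape),
                           quant_scheme=QuantScheme.post_training_percentile,
                           default_param_bw=8,    # 8-bit weights
                           default_output_bw=8)   # 8-bit activations
sim.set_percentile_value(99.99)  # percentile value from the list above
# Per-channel symmetric weight quantization is enabled via a quantsim config
# file (config_file=...), not shown here; AdaRound is applied separately.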

Results

Below are the mIoU results of the PyTorch RangeNet++ model on the SemanticKITTI dataset:

Model Configuration             FP32 (%)   W8A8 (%)
rangeNet_plus_FP32              47.2       46.8
rangeNet_plus_W8A8_checkpoint   -          47.0