Please install and set up AIMET before proceeding further. This model was tested with the torch_gpu variant of AIMET 1.24.0.
- Clone the RangeNet++ repository:
```bash
git clone https://github.com/PRBonn/lidar-bonnetal.git
```
- Apply the patches to the two darknet.py files in the cloned repository using the commands below. These changes are required so that the model meets the requirements of AIMET's `prepare_model` API (see the sketch after this list):
```bash
patch /path/to/lidar-bonnetal/train/backbones/darknet.py /path/to/aimet-model-zoo/aimet_zoo_torch/rangenet/train/models/backbones/darknet.patch
patch /path/to/lidar-bonnetal/train/tasks/semantic/decoders/darknet.py /path/to/aimet-model-zoo/aimet_zoo_torch/rangenet/train/tasks/semantic/decoders/darknet.patch
```
- Create a new folder in which to put the downloaded dataset
- Create a new folder in which to put the downloaded original/optimized models
- Add the "models/train/tasks/semantic/evaluate.py" file to your "models/train/tasks/semantic" path
- Add AIMET Model Zoo to the Python path:
```bash
export PYTHONPATH=$PYTHONPATH:<aimet_model_zoo_path>
```
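For context on the `prepare_model` requirement mentioned in the patch step above, here is a minimal, illustrative sketch of how AIMET's model preparer is typically invoked. The stand-in network below is a placeholder, not the actual RangeNet++ model; the evaluation script handles loading and preparing the real model:

```python
import torch
from aimet_torch.model_preparer import prepare_model

# Placeholder network; in practice this would be the patched RangeNet++
# model loaded from the lidar-bonnetal code base.
model = torch.nn.Sequential(
    torch.nn.Conv2d(5, 32, kernel_size=3, padding=1),
    torch.nn.ReLU(),
).eval()

# prepare_model rewrites the model graph (e.g. replacing functional calls
# with torch.nn modules) so it conforms to AIMET's model guidelines;
# the patches above make RangeNet++ compatible with this step.
prepared_model = prepare_model(model)
```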
The SemanticKITTI dataset can be downloaded from here:
The folder structure and format of the SemanticKITTI dataset are as follows:
```
dataset
└── sequences
    └── 00
        ├── velodyne
        │   ├── 000000.bin
        │   └── 000001.bin
        ├── labels
        │   ├── 000000.label
        │   └── 000001.label
        └── poses.txt
```
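As a quick sanity check of the layout above, a scan and its label file can be loaded with NumPy. This is a minimal sketch: the paths are examples, and the binary formats (float32 x/y/z/remission points; uint32 labels whose lower 16 bits hold the semantic class) follow the SemanticKITTI format:

```python
import numpy as np

# Example paths following the folder layout shown above.
scan_path = "dataset/sequences/00/velodyne/000000.bin"
label_path = "dataset/sequences/00/labels/000000.label"

# Each scan is a flat float32 array of (x, y, z, remission) values.
points = np.fromfile(scan_path, dtype=np.float32).reshape(-1, 4)

# Each label is a uint32: the lower 16 bits hold the semantic class id,
# the upper 16 bits hold the instance id.
labels = np.fromfile(label_path, dtype=np.uint32)
semantic_labels = labels & 0xFFFF

assert points.shape[0] == semantic_labels.shape[0]  # one label per point
```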
- The original prepared RangeNet++ checkpoint can be downloaded from here:
- The optimized checkpoint can be downloaded from here:
- The Quantization Simulation (Quantsim) Configuration file can be downloaded from here: default_config_per_channel.json (Please see this page for more information on this file).
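For orientation, a per-channel Quantsim configuration of this kind contains settings along the following lines. This is an abbreviated, illustrative excerpt that assumes the file follows AIMET's standard quantsim config schema; treat the downloaded default_config_per_channel.json as the authoritative version:

```json
{
  "defaults": {
    "ops": { "is_output_quantized": "True" },
    "params": { "is_quantized": "True", "is_symmetric": "True" },
    "per_channel_quantization": "True"
  },
  "params": {
    "bias": { "is_quantized": "False" }
  },
  "model_input": { "is_input_quantized": "True" }
}
```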
To run evaluation with QuantSim in AIMET, use the following:
```bash
python rangenet++_quanteval.py \
    --dataset-path <The path to the dataset, default is '../models/train/tasks/semantic/dataset/'> \
    --model-orig-path <The path to the original model, default is '../models/train/tasks/semantic/pre_trained_model'> \
    --model-optim-path <The path to the optimized model, default is '../models/train/tasks/semantic/quantized_model'> \
    --use-cuda <Use cuda or cpu, default is True> \
    --batch-size <Number of images per batch, default is 1>
```
- Weight quantization: 8 bits, per-channel symmetric quantization
- Bias parameters are not quantized
- Activation quantization: 8 bits, asymmetric quantization
- Model inputs are quantized
- Percentile was used as the quantization scheme, with the percentile value set to 99.99
- Batch-norm folding and AdaRound have been applied to the optimized checkpoint
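The settings above map roughly onto the following QuantizationSimModel construction. This is a minimal sketch under stated assumptions, not the actual evaluation script: the network, the 5×64×2048 range-image input shape, and the config file path are placeholders for what the script sets up:

```python
import torch
from aimet_common.defs import QuantScheme
from aimet_torch.quantsim import QuantizationSimModel

# Placeholder network and input; the real script uses the prepared
# RangeNet++ model and LiDAR range-image inputs.
model = torch.nn.Sequential(
    torch.nn.Conv2d(5, 32, kernel_size=3, padding=1),
    torch.nn.ReLU(),
).eval()
dummy_input = torch.randn(1, 5, 64, 2048)

sim = QuantizationSimModel(
    model,
    dummy_input=dummy_input,
    quant_scheme=QuantScheme.post_training_percentile,  # percentile calibration
    default_param_bw=8,    # 8-bit weights
    default_output_bw=8,   # 8-bit activations
    config_file="default_config_per_channel.json",  # per-channel, inputs quantized
)
sim.set_percentile_value(99.99)  # percentile value from the configuration above

# Calibration pass: run representative data through sim.model,
# then compute the quantization encodings.
def forward_pass(model, _):
    with torch.no_grad():
        model(dummy_input)

sim.compute_encodings(forward_pass, forward_pass_callback_args=None)
```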
Below are the mIoU results of the PyTorch RangeNet++ model on the SemanticKITTI dataset:
| Model Configuration | FP32 mIoU (%) | W8A8 mIoU (%) |
|---|---|---|
| rangeNet_plus_FP32 | 47.2 | 46.8 |
| rangeNet_plus_W8A8_checkpoint | - | 47.0 |