Issue Type
Bug
Source
pip (model-compression-toolkit)
MCT Version
2.2.0
OS Platform and Distribution
Google Colab
Python version
3.10.12
Describe the issue
Running imx500_notebooks/pytorch/pytorch_yolov8n_seg_for_imx500.ipynb on a GPU Colab runtime fails during evaluation of the quantized segmentation model with a device-mismatch RuntimeError (see the log output below).
Expected behaviour
No response
Code to reproduce the issue
```python
from tutorials.mct_model_garden.evaluation_metrics.coco_evaluation import evaluate_yolov8_segmentation

evaluate_yolov8_segmentation(quant_model, seg_model_predict, data_dir='coco', data_type='val2017',
                             img_ids_limit=100, output_file='results_quant.json', iou_thresh=0.7,
                             conf=0.001, max_dets=300, mask_thresh=0.55)
```
Log output
```
loading annotations into memory...
Done (t=0.44s)
creating index...
index created!
Processing Images:   0%|          | 0/100 [00:00<?, ?it/s]
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-13-837fbbfbddac> in <cell line: 2>()
      1 from tutorials.mct_model_garden.evaluation_metrics.coco_evaluation import evaluate_yolov8_segmentation
----> 2 evaluate_yolov8_segmentation(quant_model, seg_model_predict, data_dir='coco', data_type='val2017', img_ids_limit=100, output_file='results_quant.json', iou_thresh=0.7, conf=0.001, max_dets=300, mask_thresh=0.55)

12 frames
/content/tutorials/mct_model_garden/evaluation_metrics/coco_evaluation.py in evaluate_yolov8_segmentation(model, model_predict_func, data_dir, data_type, img_ids_limit, output_file, iou_thresh, conf, max_dets, mask_thresh)
    539
    540     # Run the model
--> 541     output = model_predict_func(model, input_img)
    542
    543     # run post processing (nms)

/content/tutorials/mct_model_garden/models_pytorch/yolov8/yolov8.py in seg_model_predict(model, inputs)
    540     # Run the model
    541     with torch.no_grad():
--> 542         outputs = model(input_tensor)
    543
    544     return outputs

/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py in _wrapped_call_impl(self, *args, **kwargs)
   1551             return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1552         else:
-> 1553             return self._call_impl(*args, **kwargs)
   1554
   1555     def _call_impl(self, *args, **kwargs):

/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py in _call_impl(self, *args, **kwargs)
   1560                 or _global_backward_pre_hooks or _global_backward_hooks
   1561                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1562             return forward_call(*args, **kwargs)
   1563
   1564         try:

/usr/local/lib/python3.10/dist-packages/model_compression_toolkit/core/pytorch/back2framework/pytorch_model_builder.py in forward(self, *args)
    315
    316             # Run node operation and fetch outputs
--> 317             out_tensors_of_n, out_tensors_of_n_float = _run_operation(node,
    318                                                                       input_tensors,
    319                                                                       op_func=op_func,

/usr/local/lib/python3.10/dist-packages/model_compression_toolkit/core/pytorch/back2framework/pytorch_model_builder.py in _run_operation(n, input_tensors, op_func, quantize_node_activation_fn, use_activation_quantization)
    144         merged_inputs, functional_kwargs = _merge_inputs(n, input_tensors, op_call_args, functional_kwargs.copy(),
    145                                                          tensor_input_allocs=_tensor_input_allocs)
--> 146         out_tensors_of_n_float = op_func(*merged_inputs, **functional_kwargs)
    147
    148     # Add a fake quant node if the node has an activation threshold.

/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py in _wrapped_call_impl(self, *args, **kwargs)
   1551             return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1552         else:
-> 1553             return self._call_impl(*args, **kwargs)
   1554
   1555     def _call_impl(self, *args, **kwargs):

/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py in _call_impl(self, *args, **kwargs)
   1560                 or _global_backward_pre_hooks or _global_backward_hooks
   1561                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1562             return forward_call(*args, **kwargs)
   1563
   1564         try:

/usr/local/lib/python3.10/dist-packages/mct_quantizers/pytorch/quantize_wrapper.py in forward(self, *args, **kwargs)
    254             outputs = self.layer(args, *self.op_call_args, **_kwargs)
    255         else:
--> 256             outputs = self.layer(*args, *self.op_call_args, **_kwargs)
    257
    258         return outputs

/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py in _wrapped_call_impl(self, *args, **kwargs)
   1551             return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1552         else:
-> 1553             return self._call_impl(*args, **kwargs)
   1554
   1555     def _call_impl(self, *args, **kwargs):

/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py in _call_impl(self, *args, **kwargs)
   1560                 or _global_backward_pre_hooks or _global_backward_hooks
   1561                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1562             return forward_call(*args, **kwargs)
   1563
   1564         try:

/usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py in forward(self, input)
    456
    457     def forward(self, input: Tensor) -> Tensor:
--> 458         return self._conv_forward(input, self.weight, self.bias)
    459
    460 class Conv3d(_ConvNd):

/usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
    452                         weight, bias, self.stride,
    453                         _pair(0), self.dilation, self.groups)
--> 454         return F.conv2d(input, weight, bias, self.stride,
    455                         self.padding, self.dilation, self.groups)
    456

RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
```
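The RuntimeError points to a device mismatch: the quantized model's weights are on CUDA, while the tensor passed to it inside seg_model_predict is still on the CPU. As a minimal workaround sketch (not the actual fix from #1228), one could wrap the predict step so the input is moved to whatever device the model's parameters live on; the wrapper name below is hypothetical:

```python
import torch

def seg_model_predict_on_model_device(model, inputs):
    # Hypothetical wrapper around the tutorial's predict step: read the device
    # from the model's own parameters and move the input batch there before
    # inference, so CPU-prepared images also work with a CUDA model.
    device = next(model.parameters()).device
    input_tensor = torch.as_tensor(inputs, dtype=torch.float32).to(device)
    with torch.no_grad():
        outputs = model(input_tensor)
    return outputs
```

Passing such a wrapper as model_predict_func to evaluate_yolov8_segmentation should avoid the mismatch on a GPU runtime, assuming inputs is already preprocessed to the shape the model expects.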
Thanks Alex, should be resolved with #1228.
Fixed by commit c90c5f2, "segmentation tutorial fix (#1228)": resolves issue #1222 by fixing GPU execution in the tutorial. PR merged.
Participants: samuel-wj-chapman, Idan-BenAmi