CUDA, TensorRT, CUDNN upgrade #5607
Comments
Hi @amadeuszsz, I think I have a related question. I get the following error when I try to run a Carla Autoware example:

```
[component_container_mt-61] [ERROR] [1735397635.819246109] [perception.traffic_light_recognition.traffic_light.classification.car_traffic_light_classifier]: please install CUDA, CUDNN and TensorRT to use cnn classifier
```

I run the example with:

```
ros2 launch autoware_launch e2e_simulator.launch.xml map_path:=/tmp/autoware_map/Town01 \
  vehicle_model:=sample_vehicle sensor_model:=awsim_sensor_kit simulator_type:=carla carla_map:=Town01
```

and my Docker image is:

```
$ docker inspect ghcr.io/autowarefoundation/autoware:universe-cuda | grep Created
        "Created": "2024-12-19T15:34:09.760888018Z",
```

Which components should be installed? Here is the GPU setup:

```
$ docker run -ti --runtime nvidia --gpus all ghcr.io/autowarefoundation/autoware:universe-cuda nvidia-smi
Sat Dec 28 22:12:26 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.35.05              Driver Version: 560.35.05      CUDA Version: 12.6     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce GTX 1080 Ti     Off |   00000000:01:00.0 Off |                  N/A |
|  0%   27C    P8              9W /  250W |    1038MiB /  11264MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
+-----------------------------------------------------------------------------------------+
```

It looks like the image is built without them?
@razr, your image contains TensorRT; you can check it with:

```
ldd /opt/autoware/lib/libtraffic_light_classifier_nodelet.so | grep libnv
	libnvinfer.so.8 => /usr/lib/x86_64-linux-gnu/libnvinfer.so.8 (0x00007808ea97c000)
	libnvinfer_plugin.so.8 => /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8 (0x00007808e8500000)
	libnvonnxparser.so.8 => /usr/lib/x86_64-linux-gnu/libnvonnxparser.so.8 (0x00007808e8000000)
```

or, if you use a locally built workspace:

```
ldd /workspace/install/autoware_traffic_light_classifier/lib/libtraffic_light_classifier_nodelet.so | grep libnv
	libnvinfer.so.8 => /usr/lib/x86_64-linux-gnu/libnvinfer.so.8 (0x000074245a897000)
	libnvinfer_plugin.so.8 => /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8 (0x000074245841b000)
	libnvonnxparser.so.8 => /usr/lib/x86_64-linux-gnu/libnvonnxparser.so.8 (0x0000742457e00000)
```

If you use a locally built workspace, I suggest doing a clean build and making sure you use a CUDA-supported image. Please create a separate thread if your issue still occurs.
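The `ldd | grep libnv` check above can be wrapped in a small helper that reports the result explicitly. A sketch; the library path is the container-image path mentioned above (swap in the `install/` path for a locally built workspace), and the messages and exit codes are illustrative, not part of any Autoware tooling:

```shell
#!/usr/bin/env bash
# Sketch: report whether an Autoware shared library links against TensorRT.
check_tensorrt_link() {
  local lib="$1"
  if [ ! -f "$lib" ]; then
    echo "library not found: $lib"
    return 2
  fi
  # libnvinfer in the dynamic dependencies means the node was built with TensorRT.
  if ldd "$lib" | grep -q 'libnvinfer'; then
    echo "TensorRT is linked: $lib"
  else
    echo "TensorRT is NOT linked: $lib (rebuild with CUDA/TensorRT available)"
    return 1
  fi
}

# Container image path; use the install/ path instead for a local workspace.
check_tensorrt_link /opt/autoware/lib/libtraffic_light_classifier_nodelet.so || true
```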
Checklist
Description
Purpose
Incoming new packages require a TensorRT upgrade.
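One quick way to decide whether a locally installed TensorRT already meets a new minimum is a dotted-version comparison. A sketch; both version values below are illustrative placeholders, not numbers taken from this issue:

```shell
#!/usr/bin/env bash
# Sketch: compare dotted version strings using sort -V.
version_at_least() {
  # True if $1 >= $2: after a version sort, the smaller string comes first.
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

installed="8.5.3"   # e.g. from: dpkg -s libnvinfer8 | awk '/^Version/ {print $2}'
required="8.6"      # placeholder minimum, not from this issue
if version_at_least "$installed" "$required"; then
  echo "TensorRT $installed satisfies the required minimum $required"
else
  echo "TensorRT $installed is older than required $required -- upgrade needed"
fi
```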
Possible approaches
autoware.repos (temporary)
Definition of done
- tensorrt_cmake_module
- autoware_tensorrt_common
- autoware_image_projection_based_fusion
- autoware_lidar_apollo_instance_segmentation
- autoware_lidar_centerpoint
- autoware_lidar_transfusion
- autoware_shape_estimation
- autoware_tensorrt_classifier
- autoware_tensorrt_yolox
- autoware_traffic_light_classifier
- autoware_traffic_light_fine_detector
- autoware_tensorrt_rtmdet
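After upgrading, a clean rebuild of the packages listed above is one way to verify the change end to end. A sketch; the package names come from this issue, but the `colcon` invocation is a generic ROS 2 pattern rather than a command from the issue, and the script deliberately only prints the commands so they can be reviewed before being run from the workspace root:

```shell
#!/usr/bin/env bash
# Sketch: assemble a clean-rebuild command for the TensorRT-dependent packages.
pkgs=(
  tensorrt_cmake_module
  autoware_tensorrt_common
  autoware_image_projection_based_fusion
  autoware_lidar_apollo_instance_segmentation
  autoware_lidar_centerpoint
  autoware_lidar_transfusion
  autoware_shape_estimation
  autoware_tensorrt_classifier
  autoware_tensorrt_yolox
  autoware_traffic_light_classifier
  autoware_traffic_light_fine_detector
  autoware_tensorrt_rtmdet
)
# A clean build avoids stale CMake caches that still point at the old TensorRT.
echo "rm -rf build/ install/"
echo "colcon build --symlink-install --packages-select ${pkgs[*]}"
```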