Ensure the following:
- The Getting Started instructions provided by Renesas have been completed.
- The Kakip board is set up, including the preparation of the SD card.
- A Docker container created from the `rzv2h_ai_sdk_image` Docker image is running on the host machine.
For additional information on the Kakip board, please refer to:
- Website: Kakip Board Official Website
- GitHub Repository: Kakip Board GitHub Repository
Refer to the image below for a visual guide to setting up the Kakip board for development:
This application counts the human heads present in a video from image or camera input.
This repository contains the `11_Head_count_topview` application. Below are the detailed steps for setting up, applying patches, building, and running the application.
Visit the Renesas RZ/V AI SDK Getting Started Guide for the setup.
It is recommended to download/clone the repository into the `data` folder, which is mounted on the `rzv2h_ai_sdk_container` Docker container, as shown below.

```sh
cd <path_to_data_folder_on_host>/data
git clone https://github.com/Ignitarium-Renesas/rzv_ai_apps.git
git clone https://github.com/Ignitarium-Renesas/Kakip_RZV2H_Demos.git
```
Note 1: If an error occurs, verify the Git repository URL.
Note 2: These commands download the whole repositories, including all other applications. If you have already downloaded the same version of the repositories, you can skip this step.
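The note above suggests skipping the clone when the repository is already present. That check can be sketched as a small POSIX-shell helper; the function name `clone_if_missing` and the "skip" message are illustrative assumptions, not part of the official steps.

```sh
# clone_if_missing: clone a repository only if it is not already present,
# so re-running the setup commands is safe.
clone_if_missing() {
    url=$1
    dir=$(basename "$url" .git)          # e.g. rzv_ai_apps from .../rzv_ai_apps.git
    if [ -d "$dir/.git" ]; then
        echo "skip: $dir already cloned"
    else
        git clone "$url"
    fi
}

# Usage (run from the mounted data folder):
# clone_if_missing https://github.com/Ignitarium-Renesas/rzv_ai_apps.git
# clone_if_missing https://github.com/Ignitarium-Renesas/Kakip_RZV2H_Demos.git
```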
Run (or start) the Docker container and open a bash terminal inside it. Here, `rzv2h_ai_sdk_container` is used as the name of the container created from the `rzv2h_ai_sdk_image` Docker image.
> Note that all the build steps/commands listed below are executed on the docker container bash terminal.
Set your clone directory in an environment variable:

```sh
export PROJECT_PATH=/drp-ai_tvm/data
```
Apply the provided patch file to the application using the commands below:

```sh
cd ${PROJECT_PATH}/rzv_ai_apps/11_Head_count_topview/
git apply ../../Kakip_RZV2H_Demos/Head_count_topview/Head_count_topview.patch
```
Move to the source code directory of the application:

```sh
cd ${PROJECT_PATH}/rzv_ai_apps/11_Head_count_topview/src
```
Build the application with the commands below:

```sh
mkdir -p build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=./toolchain/runtime.cmake -DV2H=ON ..
make -j$(nproc)
```
The built application file will be available in `${PROJECT_PATH}/rzv_ai_apps/11_Head_count_topview/src/build`, and the generated file will be named `head_count_topview_app`.
If you prefer to skip compilation, you can directly use the precompiled application binary included in this repository. This is useful if you want to quickly run the program without setting up the build environment.

```sh
cp -r ../../Kakip_RZV2H_Demos/Head_count_topview/head_count_topview_app exe_v2h
```
For ease of deployment, all deployable files and folders are provided in the `exe_v2h` folder.

| File | Details |
|---|---|
| `topview_head_count_yolov3` | Model object files for deployment. |
| `head_count_topview_app` | Application file. |
Follow the steps below to deploy the project on the board.

- Run the commands below to download `11_Head_count_topview_deploy_tvm-v230.so` from Release v5.00:

  ```sh
  cd ${PROJECT_PATH}/rzv_ai_apps/11_Head_count_topview/exe_v2h/topview_head_count_yolov3
  wget https://github.com/Ignitarium-Renesas/rzv_ai_apps/releases/download/v5.00/11_Head_count_topview_deploy_tvm-v230.so
  ```

- Rename `11_Head_count_topview_deploy_tvm-v230.so` to `deploy.so`:

  ```sh
  mv 11_Head_count_topview_deploy_tvm-v230.so deploy.so
  ```

- Copy the following files to the `/home/root/tvm` directory of the rootfs (SD card) for the board:
  - All files in the `exe_v2h` directory (including the `deploy.so` file).
  - The `head_count_topview_app` application file, if you generated it by following the build steps above.
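The download/rename/copy sequence above is repeated for each application in this repository, so it can be sketched as one POSIX-shell helper. The function name, the skip check on an existing `deploy.so`, and the SD-card mount point in the usage example are assumptions for illustration, not part of the official steps.

```sh
# deploy_app: sketch of the deployment steps above.
#   $1 = the application's exe_v2h directory
#   $2 = model subdirectory inside exe_v2h
#   $3 = URL of the released *_deploy_tvm-v230.so file
#   $4 = mount point of the board's SD-card rootfs (assumed, e.g. /mnt/sd)
deploy_app() {
    app_dir=$1; model_dir=$2; so_url=$3; sd_mount=$4
    (
        cd "${app_dir}/${model_dir}" || exit 1
        # Download and rename only if deploy.so is not already present.
        if [ ! -f deploy.so ]; then
            wget "${so_url}"
            mv "$(basename "${so_url}")" deploy.so
        fi
    ) || return 1
    # Copy everything in exe_v2h (including deploy.so) to /home/root/tvm.
    mkdir -p "${sd_mount}/home/root/tvm"
    cp -r "${app_dir}/." "${sd_mount}/home/root/tvm/"
}

# Example (paths assumed; adjust to your setup):
# deploy_app "${PROJECT_PATH}/rzv_ai_apps/11_Head_count_topview/exe_v2h" \
#            topview_head_count_yolov3 \
#            https://github.com/Ignitarium-Renesas/rzv_ai_apps/releases/download/v5.00/11_Head_count_topview_deploy_tvm-v230.so \
#            /mnt/sd
```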
To run the application on the Kakip board, execute it on the board terminal with either USB camera or image input mode:

- Image input:

  ```sh
  ./head_count_topview_app IMAGE <path_to_the_image>
  ```

- USB camera input:

  ```sh
  ./head_count_topview_app USB
  ```
This application detects animals using a YOLOv3 model and classifies them based on input from a USB camera or an image file.
- Set up the environment, clone the repositories, and configure the Docker container.
- Ensure that `PROJECT_PATH` is set:

  ```sh
  export PROJECT_PATH=/drp-ai_tvm/data
  ```
Apply the provided patch file to the application using the commands below:

```sh
cd ${PROJECT_PATH}/rzv_ai_apps/07_Animal_detection/
git apply ../../Kakip_RZV2H_Demos/Animal_detection/animal_detection.patch
```
Move to the source code directory of the application and build it with the same commands as above:

```sh
cd ${PROJECT_PATH}/rzv_ai_apps/07_Animal_detection/src
```
The built application file will be available in `${PROJECT_PATH}/rzv_ai_apps/07_Animal_detection/src/build`, and the generated file will be named `animal_detection_app`.
If you prefer to skip compilation, you can directly use the precompiled application binary included in this repository. This is useful if you want to quickly run the program without setting up the build environment.

```sh
cp -r ../../Kakip_RZV2H_Demos/Animal_detection/animal_detection_app exe_v2h
```
For ease of deployment, all deployable files and folders are provided in the `exe_v2h` folder.

| File | Details |
|---|---|
| `animal_yolov3_onnx` | Model object files for deployment. |
| `animal_detection_app` | Application file. |
Follow the steps below to deploy the project on the board.

- Run the commands below to download `07_Animal_detection_deploy_tvm-v230.so` from Release v5.00:

  ```sh
  cd ${PROJECT_PATH}/rzv_ai_apps/07_Animal_detection/exe_v2h/animal_yolov3_onnx
  wget https://github.com/Ignitarium-Renesas/rzv_ai_apps/releases/download/v5.00/07_Animal_detection_deploy_tvm-v230.so
  ```

- Rename `07_Animal_detection_deploy_tvm-v230.so` to `deploy.so`:

  ```sh
  mv 07_Animal_detection_deploy_tvm-v230.so deploy.so
  ```

- Copy the following files to the `/home/root/tvm` directory of the rootfs (SD card) for the board:
  - All files in the `exe_v2h` directory (including the `deploy.so` file).
  - The `animal_detection_app` application file, if you generated it by following the build steps above.
To run the application on the Kakip board, execute it on the board terminal with either USB camera or image input mode:

- Image input:

  ```sh
  ./animal_detection_app IMAGE <path_to_the_image>
  ```

- USB camera input:

  ```sh
  ./animal_detection_app USB
  ```
This application detects the 10 types of vehicles listed below from camera input. It can also detect these vehicles at a 360° angle using multiple cameras.

- Car, police car, ambulance, bicycle, bus, truck, bike, tractor, auto, and fire engine
- Set up the environment, clone the repositories, and configure the Docker container.
- Ensure that `PROJECT_PATH` is set:

  ```sh
  export PROJECT_PATH=/drp-ai_tvm/data
  ```
Apply the provided patch file to the application using the commands below:

```sh
cd ${PROJECT_PATH}/rzv_ai_apps/14_Multi_camera_vehicle_detection/
git apply ../../Kakip_RZV2H_Demos/multi_camera_vehicle_detection/multi_camera.patch
```
Move to the source code directory of the application and build it with the same commands as above:

```sh
cd ${PROJECT_PATH}/rzv_ai_apps/14_Multi_camera_vehicle_detection/src
```
The built application file will be available in `${PROJECT_PATH}/rzv_ai_apps/14_Multi_camera_vehicle_detection/src/build`, and the generated file will be named `multi_camera_vehicle_detection_app`.
If you prefer to skip compilation, you can directly use the precompiled application binary included in this repository. This is useful if you want to quickly run the program without setting up the build environment.

```sh
cp -r ../../Kakip_RZV2H_Demos/multi_camera_vehicle_detection/multi_camera_vehicle_detection_app exe_v2h
```
For ease of deployment, all deployable files and folders are provided in the `exe_v2h` folder.

| File | Details |
|---|---|
| `Multi_camera_vehicle_detection_tinyyolov3` | Model object files for deployment. |
| `multi_camera_vehicle_detection_app` | Application file. |
Follow the steps below to deploy the project on the board.

- Verify the presence of the `deploy.so` file in `${PROJECT_PATH}/rzv_ai_apps/14_Multi_camera_vehicle_detection/exe_v2h/Multi_camera_vehicle_detection_tinyyolov3`.
- Copy the following files to the `/home/root/tvm` directory of the rootfs (SD card) for the board:
  - All files in the `exe_v2h` directory (including the `deploy.so` file).
  - The `multi_camera_vehicle_detection_app` application file, if you generated it by following the build steps above.
To run the application on the Kakip board, execute it on the board terminal, specifying the number of cameras as the second argument; add `FLIP` as the third argument if you want to flip the output.

- USB camera input:

  ```sh
  ./multi_camera_vehicle_detection_app USB 2
  ```

- USB camera input with flip mode:

  ```sh
  ./multi_camera_vehicle_detection_app USB 2 FLIP
  ```
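Since the camera-count argument must match the cameras actually connected, it can help to count the V4L2 capture devices before launching. This is a sketch; it assumes USB cameras appear as `/dev/video*` nodes, and some boards expose more than one node per camera, so verify the mapping (for example with `v4l2-ctl --list-devices`) before relying on the number.

```sh
# count_video_devices: count /dev/video* nodes as a rough estimate of
# how many capture devices are connected (assumption: one node per camera).
count_video_devices() {
    n=0
    for dev in /dev/video*; do
        [ -e "$dev" ] && n=$((n + 1))
    done
    echo "$n"
}

echo "Detected $(count_video_devices) /dev/video* device(s)"
```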
This application is not covered by the MIT license; it is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license. Please review the dos and don'ts here: Creative commons website link. Hand gesture model's reference: Dataset link

You can:
- Share — copy and redistribute the material in any medium or format.
- Adapt — remix, transform, and build upon the material for any purpose, even commercially.

Under these terms:
- Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
- ShareAlike — If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.
- No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
This application showcases the capability of deep neural networks to predict different hand gestures. It detects a total of 8 gestures (one, two, three, four, five, thumbs up, thumbs down, and rock) with high precision.
- Set up the environment, clone the repositories, and configure the Docker container.
- Ensure that `PROJECT_PATH` is set:

  ```sh
  export PROJECT_PATH=/drp-ai_tvm/data
  ```
Apply the provided patch file to the application using the commands below:

```sh
cd ${PROJECT_PATH}/rzv_ai_apps/12_Hand_gesture_recognition_v2/
git apply ../../Kakip_RZV2H_Demos/Hand_gesture/hand_gesture.patch
```
Move to the source code directory of the application and build it with the same commands as above:

```sh
cd ${PROJECT_PATH}/rzv_ai_apps/12_Hand_gesture_recognition_v2/src
```
The built application file will be available in `${PROJECT_PATH}/rzv_ai_apps/12_Hand_gesture_recognition_v2/src/build`, and the generated file will be named `hand_gesture_recognition_v2_app`.
If you prefer to skip compilation, you can directly use the precompiled application binary included in this repository. This is useful if you want to quickly run the program without setting up the build environment.

```sh
cp -r ../../Kakip_RZV2H_Demos/Hand_gesture/hand_gesture_recognition_v2_app exe_v2h
```
For ease of deployment, all deployable files and folders are provided in the `exe_v2h` folder.

| File | Details |
|---|---|
| `hand_yolov3_onnx` | Model object files for deployment. |
| `hand_gesture_recognition_v2_app` | Application file. |
Follow the steps below to deploy the project on the board.

- Run the commands below to download `12_Hand_gesture_recognition_v2_deploy_tvm-v230.so` from Release v5.00:

  ```sh
  cd ${PROJECT_PATH}/rzv_ai_apps/12_Hand_gesture_recognition_v2/exe_v2h/hand_yolov3_onnx
  wget https://github.com/Ignitarium-Renesas/rzv_ai_apps/releases/download/v5.00/12_Hand_gesture_recognition_v2_deploy_tvm-v230.so
  ```

- Rename `12_Hand_gesture_recognition_v2_deploy_tvm-v230.so` to `deploy.so`:

  ```sh
  mv 12_Hand_gesture_recognition_v2_deploy_tvm-v230.so deploy.so
  ```

- Verify the presence of the `deploy.so` file in `${PROJECT_PATH}/rzv_ai_apps/12_Hand_gesture_recognition_v2/exe_v2h/hand_yolov3_onnx`.
- Copy the following files to the `/home/root/tvm` directory of the rootfs (SD card) for the board:
  - All files in the `exe_v2h` directory (including the `deploy.so` file).
  - The `hand_gesture_recognition_v2_app` application file, if you generated it by following the build steps above.
To run the application on the Kakip board, execute it on the board terminal with either USB camera or image input mode:

- Image input:

  ```sh
  ./hand_gesture_recognition_v2_app IMAGE <path_to_the_image>
  ```

- USB camera input:

  ```sh
  ./hand_gesture_recognition_v2_app USB
  ```