RDK X5 Edge AI Development Board – 8GB LPDDR4 | Wi-Fi 6, CAN-FD, Sunrise 5 SoC
The RDK X5 is a powerful Edge AI development board built around the high-performance Sunrise 5 SoC, delivering up to 10 TOPS for AI inference tasks. Designed for developers, educators, and robotics enthusiasts, the RDK X5 offers a compact yet robust platform for real-time computer vision, smart robotics, and edge computing applications.
With 8GB LPDDR4 memory, Wi-Fi 6, and Bluetooth 5.4, it ensures seamless wireless connectivity, while interfaces like MIPI CSI (stereo camera support), CAN-FD, and UART make it ideal for robot control, autonomous navigation, and industrial IoT solutions.
Key Features
- Powered by Sunrise 5 SoC with up to 10 TOPS AI performance
- 8GB LPDDR4 RAM for smooth edge inference and multitasking
- Dual MIPI CSI interface – ideal for stereo vision and object detection
- Wi-Fi 6 + Bluetooth 5.4 wireless connectivity
- Supports CAN-FD, UART, GPIO, USB 3.0/2.0
- 3.5mm audio jack and microSD card slot
- Multiple power input options: USB-C, header, optional PoE
Technical Specifications
| Specification | Details |
|---|---|
| Processor | Sunrise 5 AI SoC (up to 10 TOPS) |
| Memory | 8GB LPDDR4 |
| Wireless | Wi-Fi 6 + Bluetooth 5.4 |
| Camera Interface | MIPI CSI x2 (stereo vision) |
| Audio | 3.5mm headphone jack |
| Debug | UART serial port |
| Interfaces | CAN-FD, USB 3.0/2.0, GPIO headers |
| Storage | microSD card slot, USB drives |
| Power Supply | USB-C / header pins / optional PoE |
| Board Version | v1.0 (8GB model) |
What's Included
- 1 × RDK X5 Development Board (8GB)
- 1 × Quick Start Guide
Ideal For
- Robotics and drone control systems
- Edge AI applications and inference engines
- Smart vision and stereo camera projects
- IoT development and remote sensing
Compatible with popular frameworks such as TensorFlow Lite, ONNX, and custom AI models.
Compared to RDK X3, RDK X5 offers stronger performance from CPU to BPU, boosting computing power and speeding up AI algorithm execution.
A New Way to Upgrade Boards
Automatically acquire the latest RDK OS — a single data cable is all you need for a quick and easy upgrade.
Application Examples
Over 200 open-source algorithms and solutions to accelerate application development.
Covers everything needed for robotics development—from sensors and algorithms to application samples.
Visual Line Tracking Demo
The Visual Line Tracking Demo drives a racing car around a track using vision alone: guiding lines on the track are detected and used to keep the car centered.
Overview
To achieve this functionality, three main modules are needed: visual input, environment perception, and motion control.
- Visual Input Module: Captures images of the real or simulated environment and forwards them to the environment perception module.
- Environment Perception Module: Determines the car’s position on the track and provides data to the motion control module.
- Motion Control Module: Computes motion commands based on position data and sends them to the car for actuation.
These modules can be refined and implemented using NodeHub — Horizon’s “Intelligent Robot Application Center” that offers open-source Nodes for rapid robot development. By connecting different Nodes, you can complete the implementation of these three modules.
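The three-module split above can be sketched as plain functions. This is a toy, non-ROS illustration of the data flow only; the frame size, midpoint logic, and control gains are placeholders, not the NodeHub implementation:

```python
# Toy sketch of the three-module pipeline (not the NodeHub implementation).

def visual_input():
    """Visual Input Module: capture one frame (here a fake 640x480 image)."""
    return [[0] * 640 for _ in range(480)]

def environment_perception(frame):
    """Environment Perception Module: locate the track midpoint in the frame.
    Here we simply pretend the guiding line is dead centre."""
    height, width = len(frame), len(frame[0])
    return (width // 2, height - 1)

def motion_control(midpoint, width=640, linear_speed=0.1, angular_gain=-1.0):
    """Motion Control Module: steer toward the midpoint (proportional control)."""
    offset = (midpoint[0] - width / 2) / (width / 2)  # normalised to [-1, 1]
    return {"linear_x": linear_speed, "angular_z": angular_gain * offset}

frame = visual_input()
cmd = motion_control(environment_perception(frame))
```

In the real system each arrow between these functions is a ROS2 topic, so the modules can run as independent Nodes and be swapped out individually.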
Visual Input Module
Under the “Peripheral Adaptation” category in NodeHub, select the “MIPI Camera Driver” Node to implement the visual input module. This Node supports the GC4663 wide-angle camera, providing a larger field of view suitable for racing car line tracking.
Features:
- Supports multiple resolutions: 1920x1080 and 640x480.
- Publishes the topics /hbmem_image, /image_raw, and /camera_info.
- Resolutions are selected via ros2 launch files.
For optimal performance, use the configuration file mipi_cam_640x480_nv12_hbmem.launch.py, which publishes 640x480 NV12 images through shared memory.
Deployment Steps:
sudo apt update
sudo apt install -y tros-mipi-cam
Run Command:
source /opt/tros/setup.bash
ros2 launch mipi_cam mipi_cam_640x480_nv12_hbmem.launch.py
Environment Perception Module
The Track Detection Node (in “Competition Zone”) detects the guiding lines’ position relative to the car using deep learning and publishes the midpoint of the track.
Features:
- Subscribes to /hbmem_image (hbm_img_msgs/msg/HbmMsg1080P), matching the visual input module's output.
- Publishes to /racing_track_center_detection (geometry_msgs/msg/PointStamped).
- Model file path: /opt/nodehub_model/race_detection/race_track_detection.bin (replaceable).
Deployment Steps:
sudo apt update
sudo apt install -y tros-racing-track-detection-resnet
Run Command:
source /opt/tros/setup.bash
ros2 launch racing_track_detection_resnet racing_track_detection_resnet.launch.py
For training methods, refer to “Track Detection Model Training and Deployment Complete Explanation.”
Motion Control Module
The Car Line Following Control Node (under “Competition Zone”) receives messages from the Track Detection Node to control the car’s line-following behavior.
Features:
- Subscribes to /racing_track_center_detection (geometry_msgs/msg/PointStamped).
- Publishes control data to /cmd_vel (geometry_msgs/msg/Twist).
Deployment Steps:
sudo apt update
sudo apt install -y tros-racing-control
Run Command:
source /opt/tros/local_setup.bash
ros2 launch racing_control racing_control.launch.py \
  avoid_angular_ratio:=0.2 avoid_linear_speed:=0.1 \
  follow_angular_ratio:=-1.0 follow_linear_speed:=0.1
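The launch arguments suggest two behaviours, obstacle avoidance and line following, each parameterised by an angular ratio and a linear speed. A minimal sketch of how the follow parameters might map a detected track midpoint to a velocity command; the exact semantics are an assumption on my part, not taken from the racing_control source:

```python
def twist_from_track(center_x, image_width=640,
                     follow_angular_ratio=-1.0, follow_linear_speed=0.1):
    """Map the detected track midpoint to a (linear, angular) velocity pair.

    The angular command is proportional to the midpoint's horizontal offset
    from the image centre; the negative ratio steers back toward the line.
    """
    offset = (center_x - image_width / 2) / (image_width / 2)
    return follow_linear_speed, follow_angular_ratio * offset
```

For example, a midpoint at pixel 480 in a 640-wide image is half a half-width to the right, producing a leftward angular command of -0.5 rad/s at the default ratio.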
Chassis Setup
Under “Peripheral Adaptation,” select the Originbot Chassis Driver Node to receive control messages and manage motor movement.
Features:
- Subscribes to /cmd_vel (geometry_msgs/msg/Twist), matching the Motion Control Node's output.
Deployment Steps:
sudo apt update
sudo apt install -y tros-originbot-base tros-serial tros-originbot-msgs
Run Command:
source /opt/tros/setup.bash
ros2 launch originbot_base robot.launch.py
At this point, the visual line tracking demo setup is complete. The system integrates visual input, environment perception, and motion control modules to achieve full autonomous track following.
Source: D-Robotics NodeHub · Horizon Robotics · Published 2025-11-12 · License: Apache 2.0
Visual Inertial Odometry (VIO) Demo
VIO integrates camera and IMU data to achieve robot localization. It is low-cost, widely applicable, and can compensate for failures in satellite positioning (e.g., occlusion or multipath interference), enabling high-precision outdoor navigation.
Function Introduction
Code Repository: GitHub Link
VIO subscribes to image and IMU data from the Realsense camera, calculates the camera trajectory, and publishes motion paths via ROS2 topics. Visualization can be done with rviz2 on a PC.
Bill of Materials
| Robot Name | Manufacturer | Reference Link |
|---|---|---|
| RDK X3 | See reference link | Click to jump |
| Realsense | Intel | RealSense D435i |
User Instructions
Preparation
- RDK must have Ubuntu installed.
- RDK must be properly installed and powered.
- Connect Realsense camera via USB 3.0.
Hardware Connection
Connect Realsense to RDK as per the diagram (realsense-x3).
1. Install Package
# For tros foxy
sudo apt update
sudo apt install -y tros-hobot-vio
# For tros humble
sudo apt update
sudo apt install -y tros-humble-hobot-vio
2. Run VIO Feature
Use the launch file to start Realsense and VIO:
# tros foxy
source /opt/ros/foxy/setup.bash
source /opt/tros/local_setup.bash
ros2 launch hobot_vio hobot_vio.launch.py
# tros humble
source /opt/tros/humble/local_setup.bash
ros2 launch hobot_vio hobot_vio.launch.py
During initialization, keep the camera stationary, then translate it forward to complete initialization. Afterward, camera movement initiates visual-inertial localization.
3. Viewing Results
Use rviz2 with ROS2 installed on a PC on the same network. Configure subscription topics as per “Interface Explanation.”
Interface Explanation
Input Topics
| Parameter Name | Type | Description | Mandatory | Default Value |
|---|---|---|---|---|
| path_config | std::string | Path to VIO config file | Yes | /opt/tros/${tros_distro}/lib/hobot_vio/config/realsenseD435i.yaml |
| image_topic | std::string | ROS2 image topic | Yes | /camera/infra1/image_rect_raw |
| imu_topic | std::string | ROS2 IMU topic | Yes | /camera/imu |
| sample_gap | std::string | Process every Nth frame (1 = every frame) | Yes | 2 |
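The sample_gap parameter controls how often frames are handed to the VIO solver. A small sketch of that skip logic, based on my reading of the description rather than the hobot_vio code:

```python
def should_process(frame_index, sample_gap=2):
    """Return True for frames the VIO solver should consume.

    sample_gap=1 processes every frame; sample_gap=2 every second frame.
    """
    return frame_index % sample_gap == 0

# With the default sample_gap of 2, frames 0, 2, 4, ... are processed.
processed = [i for i in range(6) if should_process(i, sample_gap=2)]
```

Raising sample_gap trades trajectory smoothness for lower CPU/BPU load, which can matter when the same board also runs detection Nodes.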
Output Topic
| Topic Name | Type | Description |
|---|---|---|
| horizon_vio/horizon_vio_path | nav_msgs::msg::Path | Robot's motion trajectory output |
FAQs & Notes
- If running ros2 commands gives "-bash: ros2: command not found", configure the terminal environment:
  # tros foxy
  source /opt/tros/local_setup.bash
  # tros humble
  source /opt/tros/humble/local_setup.bash
- Install the Realsense ROS2 packages:
  # ROS2 Foxy example
  sudo apt-get install ros-foxy-librealsense2* ros-foxy-realsense2-camera ros-foxy-realsense2-description -y
  # ROS2 Humble example
  sudo apt-get install ros-humble-librealsense2* ros-humble-realsense2-camera ros-humble-realsense2-description -y
- The trajectory is saved automatically in trans_quat_camera_xx.txt with columns: timestamp, x, y, z, quaternion w, x, y, z.
- Monocular VIO requires initialization. Move the camera smoothly during operation.
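The saved trajectory rows (timestamp, position, quaternion) can be read back with a few lines of Python. This sketch assumes whitespace-separated columns in the order listed above; the file name pattern trans_quat_camera_xx.txt comes from the source:

```python
def parse_trajectory_line(line):
    """Parse one saved row: timestamp x y z qw qx qy qz (whitespace-separated)."""
    t, x, y, z, qw, qx, qy, qz = (float(v) for v in line.split())
    return {"t": t, "pos": (x, y, z), "quat": (qw, qx, qy, qz)}

# Example row in the assumed format:
row = parse_trajectory_line("1699999999.5 0.1 0.2 0.0 1.0 0.0 0.0 0.0")
```

Parsed rows can then be plotted or compared against ground truth to evaluate drift over a recorded run.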
Source: D-Robotics Hobot VIO GitHub
Mono2D Body Detection Demo
This demo shows human body detection from a single RGB image using the hobot_dnn package on the RDK X3. A Faster R-CNN model running on the BPU detects human body, head, face, and hand boxes along with body keypoints.
Function Introduction
The demo subscribes to image messages and publishes perception results through hobot_msgs/ai_msgs/msg/PerceptionTargets. Users can subscribe to these AI messages for their applications.
Bill of Materials
| Material Name | Manufacturer | Reference Link |
|---|---|---|
| RDK X3 / RDK Ultra | Multiple Manufacturers | RDK X3 / RDK Ultra |
| Camera | Multiple Manufacturers | MIPI Camera / USB Camera |
Preparation
- RDK comes with Ubuntu 20.04 pre-installed.
- Camera must be properly connected to RDK X3.
Instructions
1. Install Package
# tros foxy
sudo apt update
sudo apt install -y tros-mono2d-body-detection
# tros humble
sudo apt update
sudo apt install -y tros-humble-mono2d-body-detection
2. Run Human Body Detection
Using MIPI Camera:
# tros foxy
source /opt/tros/setup.bash
cp -r /opt/tros/${TROS_DISTRO}/lib/mono2d_body_detection/config/ .
export CAM_TYPE=mipi
ros2 launch mono2d_body_detection mono2d_body_detection.launch.py
# tros humble
source /opt/tros/humble/setup.bash
cp -r /opt/tros/${TROS_DISTRO}/lib/mono2d_body_detection/config/ .
export CAM_TYPE=mipi
ros2 launch mono2d_body_detection mono2d_body_detection.launch.py
Using USB Camera:
# tros foxy
source /opt/tros/setup.bash
cp -r /opt/tros/${TROS_DISTRO}/lib/mono2d_body_detection/config/ .
export CAM_TYPE=usb
ros2 launch mono2d_body_detection mono2d_body_detection.launch.py
# tros humble
source /opt/tros/humble/setup.bash
cp -r /opt/tros/${TROS_DISTRO}/lib/mono2d_body_detection/config/ .
export CAM_TYPE=usb
ros2 launch mono2d_body_detection mono2d_body_detection.launch.py
Using Local Replay Images (Humble Only):
cp -r /opt/tros/${TROS_DISTRO}/lib/mono2d_body_detection/config/ .
export CAM_TYPE=fb
ros2 launch mono2d_body_detection mono2d_body_detection.launch.py publish_image_source:=config/person_body.jpg publish_image_format:=jpg publish_output_image_w:=960 publish_output_image_h:=544
3. Checking the Effects
Open a browser on a computer in the same network, visit http://IP:8000 to view real-time detection (replace IP with the RDK’s IP address).
Interface Description
Topics
| Name | Message Type | Description |
|---|---|---|
| /hobot_mono2d_body_detection | hobot_msgs/ai_msgs/msg/PerceptionTargets | Human body recognition results |
| /hbmem_img | hobot_msgs/hbm_img_msgs/msg/HbmMsg1080P | Subscribe to shared memory image data (is_shared_mem_sub == 1) |
| /image_raw | sensor_msgs/msg/Image | Subscribe to image data via standard ROS (is_shared_mem_sub == 0) |
Parameters
| Parameter Name | Type | Description | Required | Default Value |
|---|---|---|---|---|
| is_sync_mode | int | Synchronous/asynchronous inference mode (0=async,1=sync) | No | 0 |
| model_file_name | std::string | Path to inference model | No | config/multitask_body_head_face_hand_kps_960x544.hbm |
| is_shared_mem_sub | int | Subscribe via shared memory? 0=no,1=yes | No | 1 |
| ai_msg_pub_topic_name | std::string | Topic for publishing perception results | No | /hobot_mono2d_body_detection |
| ros_img_topic_name | std::string | ROS image topic name | No | /image_raw |
| image_gap | int | Frame skip interval (1=every frame) | No | 1 |
Thank You for Your Purchase!
We hope you are happy with your purchase. However, if you are not completely satisfied for any reason, you may return it for a full refund, store credit, or exchange. Please see below for more details on our return policy.
RETURNS
- All returns must be postmarked within seven (7) days of the purchase date.
- Items must be in new and unused condition, with all original tags and labels attached.
RETURN PROCESS
Return Address Request
To obtain the return address, please contact us via email before sending your return. We will provide you with the correct return details.
📧 Email us at: support@meshnology.com
Thank you for your understanding!
📌 Note: Customers are responsible for all return shipping costs. We strongly recommend using a trackable shipping method.
REFUNDS
- After receiving and inspecting your return, we will process your refund or exchange.
- Please allow at least seven (7) days from the receipt of your item for processing.
- Refunds may take 1-2 billing cycles to appear on your credit card statement, depending on your credit card company.
EXCEPTIONS
The following items cannot be returned or exchanged:
- Improper or unreasonable use or maintenance
- Failure to follow operating instructions
- Accidents or excessive moisture damage
- Damage from insects, lightning, or power surges
- Connections to improper voltage supplies
- Unauthorized alterations or modifications
- Damage from inadequate packing or shipping procedures
- Use with other incompatible products
- Products that require modification for use outside their intended country
- Items purchased from unauthorized dealers
For defective or damaged products, please contact us at the email below to arrange a refund or exchange.
QUESTIONS?
If you have any questions about our return policy, feel free to contact us at support@meshnology.com.
Our goal is to offer you the best shipping options, no matter where you live. Every day, we deliver to hundreds of customers across the world, ensuring that we provide the highest levels of responsiveness to you at all times.
Important Shipping Update for US Customers
Due to the rising cost of tariffs, we regret to inform our US customers that additional shipping fees will be applied as follows:
- Orders under $100: $10 additional shipping fee
- Orders between $100 and $200: $20 additional shipping fee
- Orders over $200: $30 additional shipping fee
We appreciate your understanding and continued support. Should you have any questions, feel free to contact us. Thank you!
Shipping Rules:
- Before the order ships, you can cancel it, change the address information, and modify the shipping method. (We cannot change shipping addresses, shipping methods, or cancel your order after it has been shipped.)
- No return service is available once your package arrives at your local destination. If your address is incorrect or you do not pick up the package on time, it will most likely be destroyed locally. (Once your package is shipped, we will update the tracking information for you. Please pay attention to it, and contact us at support@meshnology.com or message us online if you have any issues.)
- For any abnormal orders, we will contact you via email or WhatsApp within 3 working days. If there is no reply within two days, the order will be canceled by default (e.g., due to items being sold out, pricing issues, shipping method restrictions, additional shipping fees, or battery-related restrictions).
- Customers are responsible for paying customs taxes.
How Long Until Your Order Ships?
- In stock: We will ship your order within 1-2 working days.
- Pre-order: Usually takes 2-7 working days, except for certain products.
Shipping Methods & Delivery Times
We currently offer three shipping methods:
- Free Standard Shipping - Your orders will be shipped via Standard Shipping by default (except for remote areas).
- Expedited Shipping - If you prefer faster delivery via DHL, FedEx, UPS, etc., you will need to pay an extra shipping fee.
- Ocean Shipping (USA Only) - Available for shipments to the USA.
Estimated Delivery Times (Standard Shipping):
- USA: 7-18 business days
- Japan: 4-12 business days
- Australia / Europe: 8-20 business days
- Canada: 12-25 business days
- Southeast Asia: 4-12 business days
- Turkey: 7-16 business days
- Russia: 20-50 business days
- Other Countries: 15-20 business days
Please Note:
- These are estimated delivery times; we cannot guarantee an exact delivery date.
- Orders containing batteries may experience longer shipping times.
- We are not responsible for shipping delays beyond our control.
Battery Shipping Restrictions
- We CANNOT ship batteries to the following countries: Turkey, India, Germany, Brazil, Mexico, Chile, New Zealand, etc. (If your order contains a battery and we have no shipping method available, we will contact you to change the product or cancel the order.)
- We CAN ship batteries to these countries: USA, Japan, Russia, Austria, Belgium, Czech Republic, Denmark, France, Hungary, Ireland, Italy, Luxembourg, Norway, Poland, Portugal, Slovakia, Spain, Sweden, Switzerland, Netherlands.
Customs Clearance & Taxes
- Customers are responsible for customs taxes.
- If customs clearance cannot be completed due to customer-related reasons, we cannot provide a refund.
Warehouses
- US Warehouse – Ships only to US addresses.
- CN Warehouse – Ships internationally.
- If no warehouse option is displayed on the product page, the order will be shipped from the CN warehouse.
For further assistance, please contact us at support@meshnology.com.

