Cardinal Perception is CSM's perception package, used as a base for all advanced robot autonomy. It is structured as a multi-stage pipeline consisting of odometry, fiducial-based localization, mapping, traversability estimation, and path-planning components.
*(Pipeline diagram not shown here; note that it is slightly out of date.)*
See the architecture documentation for more information on individual pipeline stages.
- Install ROS2 if necessary
- Use rosdep to install ROS package dependencies
  - Initialize rosdep if necessary:
    ```shell
    sudo rosdep init
    ```
  - Update and install:
    ```shell
    rosdep update
    rosdep install --ignore-src --from-paths . -r -y
    ```
- Install apt dependencies (these should have already been resolved by rosdep):
  ```shell
  sudo apt update
  sudo apt-get install libpcl-dev libopencv-dev
  ```
- Build with colcon (bracketed arguments are optional):
  ```shell
  colcon build --symlink-install [--executor parallel] [--event-handlers console_direct+] [--cmake-args=-DCMAKE_EXPORT_COMPILE_COMMANDS:BOOL=ON]
  source install/setup.bash
  ```
Additionally, there are various compile-time configurations which are exposed as CMake options. These are all listed in the config generator template.
The project has been verified to build and function on ROS2 Humble (Ubuntu 22.04), Jazzy (Ubuntu 24.04) and Kilted (Ubuntu 24.04), on both x86-64 and aarch64 architectures, as well as WSL.
To run Cardinal Perception, you will need (REQUIRED):
- A `sensor_msgs::msg::PointCloud2` topic providing a 3D LiDAR scan.
- A correctly configured `config/perception.yaml` file (or equivalent) - see the related documentation for information on parameters.

Optionally, you may also need:
- A `sensor_msgs::msg::Imu` topic providing IMU samples. This can help stabilize the odometry system, especially when an orientation estimate is directly usable as part of each sample.
- A set of `/tf` or `/tf_static` transforms provided by `robot_state_publisher`. This is necessary when the coordinate frame of the LiDAR scan differs from that of the IMU, or if you want to compute odometry for a frame other than that of the LiDAR scan.
- A customized launch configuration to support your specific robot setup.
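To illustrate why the static LiDAR-IMU transform matters, here is a minimal, self-contained sketch (plain Python, not the package's actual code; the extrinsic rotation is made up) of re-expressing an IMU orientation sample in the LiDAR frame by composing it with a fixed extrinsic quaternion — the kind of relationship `/tf_static` encodes:

```python
# Illustrative only: composing a static extrinsic with an IMU orientation.
# Quaternions are (w, x, y, z); the 90-degree extrinsic is a made-up example.
import math

def quat_mul(a, b):
    """Hamilton product of two quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    )

# Static extrinsic: LiDAR frame rotated 90 degrees about Z relative to the IMU.
half = math.radians(90) / 2
q_lidar_from_imu = (math.cos(half), 0.0, 0.0, math.sin(half))

# An orientation sample as it might arrive on the IMU topic (identity here).
q_imu = (1.0, 0.0, 0.0, 0.0)

# The same orientation expressed in the LiDAR frame.
q_in_lidar = quat_mul(q_lidar_from_imu, q_imu)
```

Without this transform, orientation samples and scan points would be fused in mismatched frames, which is why the `/tf` tree is required whenever the two sensors are not co-aligned.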
Finally, to use the AprilTag detector for global estimates, you will need:
- A `sensor_msgs::msg::Image` topic and accompanying `sensor_msgs::msg::CameraInfo` topic for each camera to be used.
- A launch configuration file describing the behavior of the fiducial-tag system. See the provided config file for an example.
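The `CameraInfo` topic matters because its intrinsic matrix is what relates a tag's pixel corners to a 3D pose. As a rough, self-contained sketch (not the detector's actual code; the intrinsic values are invented for illustration), projecting a point in the camera frame through a pinhole model with `CameraInfo`-style intrinsics looks like:

```python
# Pinhole projection using CameraInfo-style intrinsics (fx, fy, cx, cy).
# All numeric values are illustrative, not from a real calibration.

def project(point_cam, fx, fy, cx, cy):
    """Project a 3D point (x, y, z) in the camera frame to pixel (u, v)."""
    x, y, z = point_cam
    assert z > 0, "point must be in front of the camera"
    u = fx * x / z + cx
    v = fy * y / z + cy
    return u, v

# Hypothetical intrinsics for a 640x480 camera.
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0

# A tag corner 1 m ahead and 0.1 m to the right projects right of center.
u, v = project((0.1, 0.0, 1.0), fx, fy, cx, cy)  # u = 380.0, v = 240.0
```

Tag pose estimation inverts this relationship, which is why an uncalibrated or missing `CameraInfo` stream makes global estimates unusable.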
The following nodes are built, which can all be run individually or in a launch system:
- `perception_node` - The core perception package
- `tag_detection_node` - The AprilTag detector
- `pplan_client_node` - A service caller that can interface with Foxglove Studio cursor clicks
However, to fully configure the perception system, you will need to build and source the launch_utils package. All three packages can then be configured and run using the JSON config and the following launch command:
```shell
ros2 launch cardinal_perception perception.launch.py <launch args...>
```
For an advanced usage example, see this repo.
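Before launching, it can help to sanity-check that the JSON config parses cleanly. The snippet below is a generic sketch (the key names and structure are invented for illustration; consult the actual config file for the real schema):

```python
# Sanity-check a JSON launch config before handing it to the launch system.
# The structure and key names here are assumptions for illustration only.
import json

config_text = """
{
    "perception": { "enabled": true },
    "tag_detection": { "enabled": true },
    "pplan_client": { "enabled": false }
}
"""

config = json.loads(config_text)  # raises json.JSONDecodeError if malformed
enabled_nodes = [name for name, opts in config.items() if opts.get("enabled")]
```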
The build script exports compile commands, which can help VSCode's C/C++ extension resolve correct syntax highlighting. To ensure this is working, paste the following code into the `c_cpp_properties.json` file (under the `.vscode` directory in a workspace):
```json
{
    "configurations": [
        {
            "name": "Linux",
            "includePath": [
                "${workspaceFolder}/**"
            ],
            "defines": [],
            "compilerPath": "/usr/bin/gcc",
            "intelliSenseMode": "linux-gcc-x64",
            "cStandard": "c17",
            "cppStandard": "c++17",
            "compileCommands": [
                "build/compile_commands.json"
            ]
        }
    ],
    "version": 4
}
```

Last updated: 9/13/25
