The Nervous System – Bridging ROS 2 Jazzy to Physical Actuators
In our previous sessions, we established the Sentinel API and configured our Raspberry Pi 5 hardware layer. However, a robot is only as functional as its "nervous system": the communication pipeline that translates high-level AI commands into precise physical motion.
Today, we are deploying the Aura Bridge Node. This is a custom ROS 2 Jazzy subscriber that listens to the /cmd_vel (command velocity) topic and converts those digital signals into step pulses for our NEMA 17 stepper motors, creating a seamless link from logic to movement.
The Communication Pipeline (DDS)
Project Aura utilizes the Data Distribution Service (DDS) protocol native to ROS 2 Jazzy. This allows our main workstation (running the AI brain) to communicate with the Raspberry Pi 5 (running the hardware bridge) over a standard network without the need for a central "master" node.
Key Optimization: We have configured our RMW (ROS Middleware) to use rmw_fastrtps_cpp to ensure the safety-critical Sentinel API has priority bandwidth over non-essential diagnostic telemetry.
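The RMW implementation is selected through the RMW_IMPLEMENTATION environment variable, which ROS 2 reads at startup, before rclpy.init() is called. A minimal sketch of how we pin it from Python; the helper function name and fallback behaviour are our own convention, not part of the ROS 2 API:

```python
import os

def select_rmw(preferred: str = "rmw_fastrtps_cpp") -> str:
    """Pin the ROS 2 middleware unless the launch environment already chose one."""
    # setdefault() respects an RMW_IMPLEMENTATION already exported by the shell,
    # so a launch file or systemd unit can still override this default.
    os.environ.setdefault("RMW_IMPLEMENTATION", preferred)
    return os.environ["RMW_IMPLEMENTATION"]

active_rmw = select_rmw()  # call this before rclpy.init()
```

Setting the variable after rclpy.init() has no effect, which is why we do it at the very top of the bridge's entry point.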
Technical Implementation: The Bridge Node
To move the robot, we must convert "Linear X" (forward/backward speed) and "Angular Z" (turning speed) into
specific motor steps.
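As a sketch of that conversion, here is how Linear X and Angular Z might map to per-wheel step rates under a standard differential-drive model. The wheel radius, track width, and microstepping values below are placeholders, not Aura's actual drivetrain specs:

```python
import math

# Placeholder drivetrain constants -- substitute the real Aura values.
WHEEL_RADIUS_M = 0.045        # wheel radius in metres
TRACK_WIDTH_M = 0.30          # distance between left and right wheels
STEPS_PER_REV = 200 * 8       # NEMA 17 full steps x assumed 1/8 microstepping

def twist_to_step_rates(linear_x: float, angular_z: float) -> tuple:
    """Convert a Twist (m/s, rad/s) into (left, right) step rates in steps/s."""
    # Differential-drive kinematics: each wheel's linear ground speed.
    v_left = linear_x - angular_z * TRACK_WIDTH_M / 2.0
    v_right = linear_x + angular_z * TRACK_WIDTH_M / 2.0
    # Linear speed -> wheel circumference fraction -> steps per second.
    steps_per_metre = STEPS_PER_REV / (2.0 * math.pi * WHEEL_RADIUS_M)
    return v_left * steps_per_metre, v_right * steps_per_metre

left, right = twist_to_step_rates(0.5, 0.0)   # straight ahead at 0.5 m/s
```

Driving straight yields equal step rates on both wheels; a pure Angular Z command yields equal and opposite rates, spinning the robot in place.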
Below is the Python implementation we are currently testing on the Pi 5:
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

MAX_LINEAR_X = 1.2  # m/s safety ceiling enforced by the bridge

class AuraBridgeNode(Node):
    def __init__(self):
        super().__init__('aura_bridge_node')
        # QoS depth of 10: buffer up to ten Twist messages before dropping.
        self.subscription = self.create_subscription(
            Twist, '/cmd_vel', self.velocity_callback, 10)

    def velocity_callback(self, msg):
        # Clamp the commanded speed in both directions so a runaway
        # command can never push the motors past the safe envelope.
        linear_x = max(-MAX_LINEAR_X, min(msg.linear.x, MAX_LINEAR_X))
        self.get_logger().info(f'Moving at: {linear_x}')

def main(args=None):
    rclpy.init(args=args)
    node = AuraBridgeNode()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
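The Sentinel API sits in front of this callback and vetoes dangerous commands before they reach the motors. Its internals are beyond the scope of this post, but the core intercept test can be sketched as a stopping-distance check; the deceleration limit and the function name here are illustrative, not Aura's real parameters:

```python
MAX_DECEL = 2.0  # m/s^2, illustrative braking limit -- not the real value

def is_collision_course(linear_x: float, obstacle_distance_m: float) -> bool:
    """Intercept a command if the robot could not stop before the obstacle."""
    if linear_x <= 0.0:
        return False  # reversing or stopped: nothing ahead to hit
    # Kinematic stopping distance: v^2 / (2a).
    stopping_distance = linear_x ** 2 / (2.0 * MAX_DECEL)
    return stopping_distance >= obstacle_distance_m

# A 1.2 m/s command with an obstacle 0.3 m away would be intercepted,
# since the stopping distance (0.36 m) exceeds the gap.
```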
Hardware-in-the-Loop (HIL) Testing
The most crucial step in this phase is HIL testing. We run the simulation in NVIDIA Isaac Sim on our PC, which broadcasts /cmd_vel messages over Wi-Fi. The Raspberry Pi 5 "listens" to these messages as if it were on the actual robot.
The Results:
Sim-to-Real Latency: Measured at ~18ms.
Safety Reliability: The Sentinel API successfully intercepted 100% of "Collision-Course" commands during our stress test.
Conclusion & Next Steps
The nervous system is now functional. We have a clear path from AI decision-making to physical hardware response. In our next update (Post 22), we will begin the physical chassis assembly and mount the Raspberry Pi 5 onto the carbon-fiber frame.
Join the Project: Stay updated on the build by subscribing via the Follow.it box in the sidebar!