Hello everyone. We’ve gotten a number of inquiries from the community about integrating an Nvidia Jetson with Stretch. We’ve also seen a few enterprising groups already working with a Jetson on Stretch. Given the strong interest, we’ve decided to develop a ‘Jetson Backpack’ accessory for Stretch. The accessory will be an easy way to augment the robot’s existing onboard compute. We’re still developing the project details around the accessory and would love to get your input.
Our current (tentative) plan is to design the Jetson Backpack around the latest and most powerful Jetson AGX Orin Developer Kit. This would be mounted on Stretch’s base and wired to the robot’s onboard computer, an Intel NUC, via Ethernet. The Orin would draw power from Stretch’s onboard batteries.
Below are a few questions to help start a discussion. We look forward to hearing your perspectives!
What issues do you face today deploying Deep Learning models with Stretch? What models would you like to deploy on-robot?
Background:
- Current state-of-the-art deep learning models, particularly computer vision and NLP models, need GPU-accelerated hardware to achieve high inference rates.
- Stretch comes with an Intel i5 processor that does not have a discrete GPU. To supplement this hardware, we plan to mount a Jetson dev kit, which pairs an ARM processor with an Nvidia Ampere-architecture GPU and a variety of I/O, on the robot and offload high-performance inference tasks to the Jetson (see the sketch after this list).
- Large models, such as language models, often won’t run at all, or run with poor performance, on lower-end Jetson or RTX GPUs. What models are essential for you to run on-robot? What inference rate do you need to see? Do you plan to train neural networks on the robot?
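To make the discussion concrete, here is a minimal sketch of the kind of GPU-accelerated inference we have in mind on the Jetson. It assumes a PyTorch build with CUDA support and uses torchvision’s ResNet-18 purely as a stand-in model; your own models, input sizes, and throughput needs will differ.

```python
import time

import torch
import torchvision

# Use the Jetson's GPU when available; fall back to CPU otherwise.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-in model; swap in whatever you actually need to run on-robot.
model = torchvision.models.resnet18(weights=None).eval().to(device)

# Dummy camera-sized input.
frame = torch.rand(1, 3, 224, 224, device=device)

with torch.no_grad():
    model(frame)  # warm-up so one-time CUDA setup doesn't skew the timing
    if device.type == "cuda":
        torch.cuda.synchronize()
    start = time.time()
    for _ in range(100):
        model(frame)
    if device.type == "cuda":
        torch.cuda.synchronize()
    elapsed = time.time() - start

print(f"~{100 / elapsed:.1f} inferences/sec on {device}")
```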
What is your preferred method of running inference between systems?
Background:
- PyTorch is one of the preferred frameworks for deep learning research, and it supports deploying many of the latest models on the GPU-enabled Jetson platform.
- Other popular deep learning libraries include TensorFlow and Keras, as well as libraries in languages other than Python.
- We plan to use one of these deep learning libraries and rely on ROS networking to ferry data between the NUC and the Jetson (see the node sketch after this list).
- We are also open to suggestions for other inference message-passing methods over the network that do not require ROS.
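As a rough sketch of what the ROS2 path could look like: a node running on the Jetson subscribes to an image topic published by the NUC over the Ethernet link, runs GPU inference, and publishes the result back. The topic names (/camera/color/image_raw, /jetson/classification), the rgb8 image encoding, and the ResNet-18 stand-in model are illustrative assumptions, not a finalized interface.

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from std_msgs.msg import String

import torch
import torchvision


class JetsonInferenceNode(Node):
    """Runs on the Jetson: images in from the NUC, predictions out."""

    def __init__(self):
        super().__init__("jetson_inference")
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        # Stand-in model; replace with your own network.
        self.model = torchvision.models.resnet18(weights=None).eval().to(self.device)
        # Assumed topic names -- adjust to match your Stretch configuration.
        self.sub = self.create_subscription(
            Image, "/camera/color/image_raw", self.on_image, 1)
        self.pub = self.create_publisher(String, "/jetson/classification", 1)

    def on_image(self, msg: Image):
        # Assumes rgb8 encoding: convert raw bytes into a CHW float tensor on the GPU.
        img = torch.frombuffer(bytearray(msg.data), dtype=torch.uint8)
        img = img.reshape(msg.height, msg.width, -1).permute(2, 0, 1).float() / 255.0
        img = img.unsqueeze(0).to(self.device)
        with torch.no_grad():
            class_id = int(self.model(img).argmax(dim=1))
        self.pub.publish(String(data=f"class_{class_id}"))


def main():
    rclpy.init()
    rclpy.spin(JetsonInferenceNode())
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```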
What is your opinion on integrating Isaac ROS with Stretch ROS2?
Background:
- Isaac ROS is a collection of hardware-accelerated ROS2 packages that provide autonomy capabilities (e.g. visual SLAM, 3D reconstruction, and navigation).
- We’re considering integrating support for Isaac ROS into the stretch_ros2 packages to take advantage of Isaac ROS’s offerings and Stretch’s Jetson Backpack (see the launch sketch after this list).
- We are open to suggestions on the Isaac ROS applications you would like to see work with Stretch out of the box.
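To give a sense of what that integration might look like, the sketch below is a stretch_ros2-style launch file that brings up Isaac ROS visual SLAM alongside the Stretch driver. The package and launch-file names (stretch_core, stretch_driver.launch.py, isaac_ros_visual_slam, isaac_ros_visual_slam.launch.py) are assumptions based on the public stretch_ros2 and Isaac ROS repositories and may not match the final integration.

```python
import os

from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource


def generate_launch_description():
    # Assumed launch files -- names may differ in the released packages.
    stretch_driver = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(os.path.join(
            get_package_share_directory("stretch_core"),
            "launch", "stretch_driver.launch.py")))

    visual_slam = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(os.path.join(
            get_package_share_directory("isaac_ros_visual_slam"),
            "launch", "isaac_ros_visual_slam.launch.py")))

    return LaunchDescription([stretch_driver, visual_slam])
```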