Recording video from the Realsense camera

I may just be missing it in the docs, but is there a simple way to capture video and/or pictures from Stretch’s camera? I don’t need to process or stream anything in real time - just saving them locally while I teleoperate Stretch then downloading them later would be great. Using a lightweight Python script or command line utility would be ideal (I’m rusty on ROS and not using it at the moment).

I tried some quick Python scripts using OpenCV to see if it would basically find the camera as a webcam (I noticed some promising devices listed in /dev such as /dev/video0), but it didn’t seem to connect properly with the default settings.

Thanks in advance for any suggestions!

Hi @jdelpreto, here’s a Python command line tool that can visualize the Realsense camera’s color/depth streams and optionally save them to a video. It builds off the pyrealsense2 code examples and uses the OpenCV VideoWriter API to save a video of the captured stream.

Here’s how you can get it:

$ pip2 install -U hello-robot-stretch-body-tools

Then, you can teleop the robot using keyboard/xbox teleop and launch the Realsense tool in a separate terminal.

$ stretch_realsense_jog.py --save ./capture.avi

Here’s an example video captured from the tool.


Hope this addresses your need for a Python script to save the camera’s stream locally. If you decide to move to ROS, you may find the “rosbag” utility useful to save the topics in which the camera’s data is streamed.
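If you do eventually move to ROS, recording the camera streams is a one-line command. The topic names below are assumptions based on common realsense2_camera launch defaults, so check `rostopic list` on your robot first:

```shell
# Record the color and aligned-depth image topics into a bag file.
rosbag record -O capture.bag /camera/color/image_raw /camera/aligned_depth_to_color/image_raw

# Later, replay the recorded topics with:
rosbag play capture.bag
```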


Awesome, the script seems to be working perfectly! Thanks @bshah for making it and for the pointers to more information, I really appreciate the quick response.


Thanks again for the script! I ended up making some adjustments as I’ve been using it, and thought I’d send along my current version in case it’s useful for anyone else with similar use cases:

Most of the notable updates are below for reference:

  • Enable longer videos by writing frames to the video file as they’re received, instead of saving the whole buffer at the end. This may make it harder to maintain target frame rates at higher resolutions, but it avoids memory issues: when storing a full buffer of frames, the program was killed by the OS after 1-2 minutes due to memory usage. Continuous writing also avoids losing data if the program encounters an error. A small buffer is still maintained, in case extra processing pipelines are added on top of this.
  • Options to save a color-only video, a depth-only video, and/or a video with color and depth side by side.
  • Select desired resolutions for the color and depth images, ranging from 640x480 to 1920x1080. If the color and depth images are different resolutions, the smaller one will be padded with black.
  • Option to save the depth-only video at a lower frame rate than the color video (helps to reduce processing time and make the actual frame rate closer to the target rate).
  • Flip images horizontally since they’re mirrored by default, and optionally apply brightness equalization.
  • Prepend a timestamp to the output filename to avoid accidentally overwriting files.
  • Append the actual frame rate and the real-time duration to video filenames after completing a recording. This information can be used in post-processing to make the videos play at the correct speed; since the actual rates will differ from the nominal rates the videos were saved at, playback speed will be inaccurate by default.
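The side-by-side option with mismatched resolutions can be handled by black-padding the smaller image before stacking. Here is a minimal numpy sketch of that step (the function names and the centered-padding choice are my own, not necessarily how the script does it):

```python
import numpy as np

def pad_to_height(img, target_h):
    """Pad an HxWx3 image with black rows so it is target_h tall (centered)."""
    pad = target_h - img.shape[0]
    top = pad // 2
    return np.pad(img, ((top, pad - top), (0, 0), (0, 0)), mode="constant")

def side_by_side(color, depth_vis):
    """Stack two BGR images horizontally, padding the shorter one with black."""
    h = max(color.shape[0], depth_vis.shape[0])
    return np.hstack((pad_to_height(color, h), pad_to_height(depth_vis, h)))

# Example: a 1280x720 color image next to a 640x480 colorized depth image.
color = np.zeros((720, 1280, 3), dtype=np.uint8)
depth_vis = np.zeros((480, 640, 3), dtype=np.uint8)
combined = side_by_side(color, depth_vis)
print(combined.shape)  # (720, 1920, 3)
```

The combined frame can then be passed straight to the video writer, since its size is fixed once the two stream resolutions are chosen.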

Awesome! These are some great additions @jdelpreto; continuously writing to avoid running out of memory is a good idea! Would you mind if I folded these improvements back into the code, so they’re available to everyone out of the box? Or, if you’d like to open a PR against develop, I’d love for you to get credit for your contributions.

Sure, that sounds great! I can take a look at making a quick PR in the next few days. The script probably isn’t standardized with the coding conventions of the rest of Stretch’s codebase, but I’ll just submit what I have for now, and you can feel free to edit and clean it up as desired.

Sounds great! I’ll keep an eye out for it.

Follow-up for anyone who comes across this thread: Joseph’s improvements got merged in (thank you Joseph!) and the tool is still available as a command line tool, but it was renamed to stretch_realsense_visualizer.py.

It comes preinstalled on your robot under the hello-robot-stretch-body-tools pip package. You can use it by opening a terminal and running the following command:

$ stretch_realsense_visualizer.py --save ./capture.avi

For those looking to write their own code, the tool’s source code is a good reference for how to stream data from the Realsense camera in Stretch’s head.
