Pickup and Handover Object to Person Using ArUco Markers

Hi all!

We recently created a demo showcasing Stretch autonomously picking up a hat off the floor and handing it over to someone. We call the system “SIRA” (Socially Interactive Robot Assistant), and we built it on top of FUNMAP, ArUco marker detection, and the autonomy demos.

I am currently working at the INSPIRE Lab at the Spaulding Rehabilitation Hospital in Cambridge. Our lab aims to increase functional independence for people with high-level spinal cord injuries. We recently teamed up with the Travis Roy Foundation to start a center for independence for people living with paralysis. We believe autonomous robots like Hello Robot’s Stretch could be a key part of regaining independence and improving quality of life. For example, someone who is paralyzed may have trouble picking things up from the ground and may rely on somebody else to do it for them. SIRA can pick an object up from the ground and hand it over to the person, eliminating the need for a caregiver for this task and restoring a sense of independence they had previously lost.

This demo was built using FUNMAP and ArUco marker detection. SIRA uses two ArUco markers: one on the object to be picked up (in our case the hat) and one on the person the object is meant to be handed to (in our case Julia). SIRA localizes within its environment, then looks for the hat’s ArUco marker. Once it is found, SIRA picks the hat up from the ground. SIRA then looks for the second marker to locate the person, computes a handover position, and hands the hat over to the person wearing the marker. For instructions on trying the demo yourself, refer to the README on the GitHub page for SIRA (linked below).
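To give a rough sense of how the detection step fits together, here is a minimal sketch of a ROS node that watches the marker array and reacts to the two markers. This is not the actual SIRA code: the marker IDs and node name are hypothetical placeholders, and I am assuming the `/aruco/marker_array` topic (a `visualization_msgs/MarkerArray`) discussed later in this thread.

```python
#!/usr/bin/env python
# Minimal sketch (not the actual SIRA code) of the detection step:
# watch the ArUco marker array and react when the target markers appear.
# The marker IDs below are hypothetical placeholders.

import rospy
from visualization_msgs.msg import MarkerArray

HAT_MARKER_ID = 10     # hypothetical ID of the marker on the hat
PERSON_MARKER_ID = 11  # hypothetical ID of the marker on the person

def marker_callback(msg):
    for marker in msg.markers:
        if marker.id == HAT_MARKER_ID:
            rospy.loginfo('Hat marker found at %s', marker.pose.position)
            # ...plan the grasp with FUNMAP and pick the hat up...
        elif marker.id == PERSON_MARKER_ID:
            rospy.loginfo('Person marker found at %s', marker.pose.position)
            # ...compute a handover pose near the person and extend the arm...

rospy.init_node('sira_marker_listener')
rospy.Subscriber('/aruco/marker_array', MarkerArray, marker_callback)
rospy.spin()
```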

I would like to thank Binit Shah for all of his help getting this demo up and running. Additionally, I would like to thank the rest of the Hello Robot team for their continued support throughout this project.

See it in action on YouTube: SIRA Hat Pickup Demo

The code is open source on GitHub: jcgangemi1/sira

For further instructions on how to use it, refer to the README: sira/README.md at main · jcgangemi1/sira

Sincerely,

Julia Gangemi


Exciting to see, and thanks for sharing your code with the community as well!


Hi! First of all, you did a great job with this project. Second, I want to ask if it is possible to use the code you created to pick and place an object detected with the robot’s “Deep perception node”, which uses a tiny YOLO model to detect objects. If it is possible, how would you do it? Sorry for the questions; I am a bit new to writing code for robots.

Hello, thank you Humberto. Yes, you can use the ‘deep perception node’ to detect an object and pick it up. We used ArUco markers because they are more robust for our use case, but the deep perception node works as well. To do this, launch the deep perception node instead of the ArUco detection node. The list of detected objects is published to a topic called “/objects/marker_array”, which uses the same message type as the “/aruco/marker_array” topic published by the ArUco node. In the callback for the subscription to that topic, change the logic that filters out markers: instead of filtering on the marker ID, filter on the object class (e.g. cat, dog). For each marker received, the “text” field contains the class of the object. With these few changes, you can use SIRA with the ‘deep perception node’ instead.
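For concreteness, here is a minimal sketch of that callback change, assuming the `/objects/marker_array` topic and the “text” field behave as described above; the target class label is a placeholder you would swap for whatever object you want to pick up.

```python
#!/usr/bin/env python
# Hedged sketch of the callback change described above: the same
# MarkerArray subscription, but filtering on the object class in the
# marker's "text" field instead of the numeric marker ID.

import rospy
from visualization_msgs.msg import MarkerArray

TARGET_CLASS = 'cup'  # hypothetical class label of the object to pick up

def objects_callback(msg):
    for marker in msg.markers:
        # the deep perception node fills "text" with the detected class
        if marker.text == TARGET_CLASS:
            rospy.loginfo('Found a %s at %s', marker.text, marker.pose.position)
            # ...reuse the same pickup logic SIRA runs for the hat...

rospy.init_node('sira_object_listener')
rospy.Subscriber('/objects/marker_array', MarkerArray, objects_callback)
rospy.spin()
```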
(Thank you for your help with this, Binit)