Autonomy Video Details

Hello everyone,

Below, I provide details about the autonomy demos that ship with Stretch in order of their appearance in our autonomy compilation video.

We intend for these demos to serve as inspirational examples that others can build upon. We wrote all of the code in Python and released it as open source on GitHub.

Best wishes,
Charlie

Charlie Kemp, PhD
co-founder & CTO
Hello Robot Inc.
http://charliekemp.com


Hello World (whiteboard writing)

code
full video

In this demo, Stretch writes HELLO on a whiteboard. Stretch first uses its 3D camera to detect the nearest vertical surface or cliff, and then rotates itself into alignment. Stretch performs this alignment before writing each letter, which might be overkill. When moving the dry erase marker into contact with the whiteboard, Stretch uses motor effort to decide when it has made good contact.

Stretch’s cliff and surface alignment behavior works well in a variety of circumstances as demonstrated in a video of Stretch orienting itself to various parts of a home.
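
For those curious about the contact detection, here is a minimal sketch of a guarded motion that extends the arm until motor effort indicates contact with the whiteboard. The `robot` interface, its method names, and the threshold values are hypothetical placeholders for illustration, not the actual stretch_ros API.

```python
import time

# Hypothetical values (assumed units); the real demo tunes these for
# Stretch's arm and the dry erase marker.
EFFORT_THRESHOLD = 20.0  # effort level that indicates contact
STEP_M = 0.005           # extend the arm 5 mm per step

def move_until_contact(robot, max_travel_m=0.1):
    """Extend the arm until motor effort indicates marker contact.

    `robot` is a hypothetical interface with `extend_arm_by(meters)`
    and `arm_effort()`; the real demo communicates over ROS instead.
    """
    traveled = 0.0
    while traveled < max_travel_m:
        robot.extend_arm_by(STEP_M)
        traveled += STEP_M
        time.sleep(0.05)  # let the motion settle before reading effort
        if abs(robot.arm_effort()) > EFFORT_THRESHOLD:
            return True   # contact detected: good pressure for writing
    return False          # no contact within the allowed travel
```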


Object Grasping

code
full video

In this demo, Stretch attempts to grasp an isolated object from a flat surface. Stretch uses its 3D camera to find the nearest flat surface using a virtual overhead view. It then segments significant blobs on top of the surface. It selects the largest blob in this virtual overhead view and fits an ellipse to it. It then generates a grasp plan that makes use of the center of the ellipse, the ellipse’s major and minor axes, and the mean height of the blob.

Once it has a plan, Stretch orients its gripper, moves to the pregrasp pose, moves to the grasp pose, closes its gripper based on the estimated object width, lifts up, and retracts.
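
As a rough illustration of the ellipse fitting step, the sketch below finds the largest blob in a binary overhead image and fits an ellipse to it with OpenCV. The actual demo builds its overhead view from the 3D camera and also uses the blob's mean height; everything here is a simplified stand-in.

```python
import cv2

def plan_grasp_from_overhead(overhead_mask):
    """Fit an ellipse to the largest blob in a binary overhead image.

    `overhead_mask` is a uint8 image in which nonzero pixels mark
    points above the detected flat surface. Returns the ellipse
    parameters, which the demo would combine with the blob's mean
    height to form a grasp plan.
    """
    contours, _ = cv2.findContours(overhead_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:  # cv2.fitEllipse requires at least 5 points
        return None
    (cx, cy), (axis1, axis2), angle_deg = cv2.fitEllipse(largest)
    return {'center': (cx, cy),
            'major_axis': max(axis1, axis2),
            'minor_axis': min(axis1, axis2),  # gripper closes across this
            'angle_deg': angle_deg}
```

The estimated minor axis roughly corresponds to the object's width, which relates to how far the demo closes the gripper.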

This method is closely related to the straightforward yet surprisingly effective grasping method that the Healthcare Robotics Lab reported in the following paper. This paper also describes a surface edge alignment method that inspired the development of Stretch’s cliff and surface alignment behavior.

EL-E: An Assistive Mobile Manipulator that Autonomously Fetches Objects from Flat Surfaces, Advait Jain and Charles C. Kemp, Autonomous Robots, 2010.


Surface Wiping

code
full video

In this demo, Stretch moves a microfiber cloth across a surface while avoiding obstacles. Stretch uses the 3D camera to find and segment the nearest flat surface using a virtual overhead view. It then fits a plane to the segmented surface and detects potential obstacles. After this, it makes a wiping plan using the virtual overhead view to cover the surface while avoiding obstacles.

Once it has a plan, Stretch operates like a Cartesian robot. It reaches out to the start point and then moves down until it detects contact with the surface based on motor effort. Then, it executes the wiping plan.
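
As a rough illustration of the planning step, the toy sketch below generates back-and-forth wiping strokes over a virtual overhead grid while skipping obstacle cells. The grid representation and parameters are simplified stand-ins for what the demo actually does.

```python
import numpy as np

def plan_wipe_strokes(surface_mask, obstacle_mask, row_step=4):
    """Generate back-and-forth wiping strokes over an overhead grid.

    Both masks are boolean arrays in the virtual overhead view. Each
    stroke is a list of (row, col) cells that lie on the surface and
    away from obstacles. A toy stand-in for the demo's planner.
    """
    wipeable = surface_mask & ~obstacle_mask
    strokes = []
    for r in range(0, wipeable.shape[0], row_step):
        cols = np.flatnonzero(wipeable[r])
        if cols.size == 0:
            continue
        # Split the row into contiguous runs so strokes never pass
        # through obstacle cells.
        breaks = np.flatnonzero(np.diff(cols) > 1) + 1
        for run in np.split(cols, breaks):
            stroke = [(r, int(c)) for c in run]
            if len(strokes) % 2 == 1:
                stroke.reverse()  # alternate direction between strokes
            strokes.append(stroke)
    return strokes
```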


Drawer Opening

code
full video

In this demo, Stretch pulls open drawers. The demo assumes that Stretch is already aligned with the drawer’s pulling direction and its handle. Stretch reaches out until it detects contact using motor effort. It then moves up or down, depending on what the user selected, until it detects contact using motor effort. Finally, it attempts to pull back a fixed distance with a relatively high threshold on motor effort. The high threshold lets Stretch apply a strong pulling force without interpreting that force as undesirable contact.
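
The sequence can be sketched as a few guarded motions with different effort thresholds. The `robot` interface, its methods, and the threshold values below are hypothetical placeholders, not the actual demo code.

```python
# Hypothetical effort thresholds (assumed units); the real demo tunes
# separate values for contact detection and for pulling.
CONTACT_EFFORT = 15.0
PULL_EFFORT = 60.0  # high, so the pulling force isn't read as contact

def open_drawer(robot, hook_down=True, pull_distance_m=0.2):
    """Guarded drawer-opening sequence, mirroring the demo's steps.

    `robot` is a hypothetical interface whose guarded moves travel
    until the commanded distance is covered or motor effort exceeds
    the given limit, returning True when contact is detected.
    """
    # 1. Reach out until the hook touches the drawer face.
    if not robot.extend_arm_until_contact(effort_limit=CONTACT_EFFORT):
        return False
    # 2. Move the hook down (or up) until it catches the handle.
    direction = -1.0 if hook_down else 1.0
    if not robot.move_lift_until_contact(direction,
                                         effort_limit=CONTACT_EFFORT):
        return False
    # 3. Pull back a fixed distance with a high effort threshold,
    #    so the pulling force itself isn't treated as contact.
    robot.retract_arm(pull_distance_m, effort_limit=PULL_EFFORT)
    return True
```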

The use of a hook with force-sensing move-until-contact behaviors was inspired by the following paper from the Healthcare Robotics Lab.

Pulling Open Doors and Drawers: Coordinating an Omni-directional Base and a Compliant Arm with Equilibrium Point Control, Advait Jain and Charles C. Kemp, IEEE International Conference on Robotics and Automation (ICRA), 2010.

While this demo requires the user to first position Stretch, I would expect that greater autonomy could be achieved using Stretch’s cliff and surface alignment behavior (video) and something like the method in the following paper from the Healthcare Robotics Lab.

Autonomously Learning to Visually Detect Where Manipulation Will Succeed, Hai Nguyen and Charles C. Kemp, Autonomous Robots, 2013.


Mapping, Navigating, and Reaching to a 3D Point

code
full video

In these demos, Stretch autonomously creates a 3D map of a room and also navigates and reaches to a user-selected location on the map. These demos use FUNMAP (Fast Unified Navigation, Manipulation And Planning). FUNMAP has a lot to it, some of which you can read about in the README.md file on GitHub.

A few points worth noting follow:

  • While mapping, Stretch selects where to scan next in a non-trivial way that considers factors such as the quality of previous observations, expected new observations, and navigation distance (a toy scoring sketch follows this list).
  • In the demo, Stretch drives to locations using position control rather than velocity control, the de facto community standard. Although it’s not shown, Stretch is also capable of using velocity control and supports the standard ROS navigation stack.
  • The plan that Stretch uses to reach the target 3D point is optimized for both navigation and manipulation. For example, the planner finds a final robot pose that provides Stretch with a large manipulation workspace, which requires considering nearby obstacles, including obstacles on the ground. Interestingly, Stretch aligning itself with the edge of the surface on which the target 3D point sits may be an emergent phenomenon of this optimization. I suspect this alignment occurs because the pose provides a larger workspace.
  • Even a single head scan performed by panning the 3D camera around can result in a very nice 3D representation of Stretch’s surroundings that includes the nearby floor. Only the mast gets in the way of a panning head scan, which is why Stretch rotates in place and performs a second panning head scan to fill in its blind spot.
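
To make the scan selection in the first bullet concrete, here is a toy scoring function for candidate scan poses. The features, weights, and candidate structure are invented for illustration; FUNMAP's actual criteria live in the code linked above.

```python
def score_scan_candidate(expected_new_area_m2, prior_observation_quality,
                         navigation_distance_m,
                         w_new=1.0, w_quality=0.5, w_travel=0.2):
    """Score a candidate scan pose; higher is better.

    A toy stand-in for FUNMAP's next-scan selection: reward expected
    new coverage, penalize re-scanning well-observed regions and
    long drives. All features and weights are illustrative.
    """
    return (w_new * expected_new_area_m2
            - w_quality * prior_observation_quality
            - w_travel * navigation_distance_m)

# Example: pick the best of several hypothetical candidates.
candidates = [
    {'name': 'doorway', 'new': 6.0, 'quality': 0.2, 'dist': 3.0},
    {'name': 'corner',  'new': 2.5, 'quality': 0.8, 'dist': 1.0},
]
best = max(candidates,
           key=lambda c: score_scan_candidate(c['new'], c['quality'],
                                              c['dist']))
```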

Deep Perception

code
full video

Stretch comes with demos for object detection, face & facial landmark detection, and body landmark detection using open source deep learning models from third parties. We provide custom Python 3 code that interfaces with them and combines their 2D detections with 3D sensing. There is also an option to use an Intel Neural Compute Stick 2, which works with at least one of the models.
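
Combining 2D detections with 3D sensing typically amounts to back-projecting a detection's pixel through the registered depth image using the camera's pinhole intrinsics. Here is a minimal sketch of that computation; the variable names and calibration parameters are generic placeholders, not taken from our code.

```python
import numpy as np

def pixel_to_3d(u, v, depth_image, fx, fy, cx, cy):
    """Back-project a detected pixel (u, v) into a 3D camera-frame point.

    `depth_image` holds depth in meters registered to the color image;
    fx, fy, cx, cy are pinhole intrinsics from camera calibration.
    Returns None when there is no valid depth at that pixel.
    """
    z = float(depth_image[v, u])
    if z <= 0.0 or np.isnan(z):
        return None  # missing depth reading at this pixel
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```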


Object Handover

code
full video

This is a simple demo of object handovers, and a lot more could be done with it. Stretch performs Cartesian motions to move its gripper to a body-relative position, following a good motion heuristic: extend the arm as the last step. Stretch doesn’t attempt to avoid obstacles, although it will detect significant contact using motor effort and stop. In the video, I had the gripper hold the object relatively lightly, which made it easy to remove from Stretch’s grasp. There are a number of ways one could implement an active release, including using the accelerometers in the wrist and motor efforts.

To me, there are two especially interesting things about this demo:

  1. These simple motions work well due to the design of Stretch. It still surprises me how well Stretch moves objects to comfortable places near my body, and how unobtrusive it is.
  2. The goal point is specified relative to a 3D frame attached to the person’s mouth, estimated using deep learning models (shown in the RViz visualization video). Specifically, Stretch targets the handoff at a 3D point that is 20cm below the estimated position of the mouth and 25cm away along the direction of reaching (a sketch of this computation follows this list). I tried a number of different target points, including targets based on body landmarks, but this seemed to work the best out of the methods I attempted.
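
A minimal sketch of that target computation, assuming the mouth position and the robot's reaching direction are already available as 3D vectors in a shared frame with z pointing up:

```python
import numpy as np

def handoff_target(mouth_xyz, reach_direction):
    """Compute the handoff point: 20 cm below the estimated mouth and
    25 cm away along the reaching direction.

    Assumed conventions: z is up, and `reach_direction` points from
    the robot toward the person.
    """
    d = reach_direction / np.linalg.norm(reach_direction)
    target = np.asarray(mouth_xyz, dtype=float)
    target[2] -= 0.20    # 20 cm below the mouth
    target -= 0.25 * d   # 25 cm back along the direction of reaching
    return target
```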

This approach is strongly influenced by the following paper from the Healthcare Robotics Lab, which used a Viola-Jones face detector when delivering objects to people with disabilities.

Hand It Over or Set It Down: A User Study of Object Delivery with an Assistive Mobile Manipulator, Young Sang Choi, Tiffany L. Chen, Advait Jain, Cressel Anderson, Jonathan D. Glass, and Charles C. Kemp, IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2009.

This post is a much-needed one, thanks! It would also help other people if these details were linked from the repo.

I’m glad you like the post, @sitaneja! We have plans to improve our documentation on GitHub, but I think linking to this post is a good idea! Thanks! I just added links to two README.md files in the stretch_ros repository.

Best wishes,
Charlie

Charlie Kemp, PhD
co-founder & CTO
Hello Robot Inc.
http://charliekemp.com
