Struggling with startup on the Stretch robot

Are there any specific specifications that the microphone should include?

Hi @Mallak_Alqaisi,

There are no specific microphone specifications required from our side. Any standard USB microphone should work. A 3.5 mm microphone can also work if the GPU computer has a compatible audio input port.

In general, as long as the microphone is detected correctly by the GPU computer, it should work with the voice interface.

Also, thanks for bringing this up. We realized this requirement was not clearly mentioned in our documentation, so we will update the docs to include it.

Thanks again for pointing it out!

Best,
Jason

Hi Jason,

Thank you so much. I’m in the process of buying a microphone, and until I get it, what demonstration would you recommend I do now?

Hi @Mallak_Alqaisi,

You could still run the same demo without the --use_voice flag.
That way you can test everything without relying on the microphone.

Best,
Jason

Hi Jason,

When I run this command: python -m stretch.app.ai_pickup --use_llm I get this:

Then I type “pick up the yellow ball” and the robot does it, but when I make modifications in hand_over_task.py at this line:

        finished_handover.configure(
            message="The object has been successfully delivered to you. Thank you!", sleep_time=5.0
        )

I don’t see any changes.

I want my robot to execute several tasks in a row, for example: pick up the ball, hand it to a person, and so on.

I have two more questions:

  1. How can the movement of each part of the robot be controlled?

  2. I noticed that the robot’s movement is slow; how can I speed it up?

Hi @Mallak_Alqaisi,

Thanks for the details.

What you are seeing is likely because the hand_over_task.py code is not being triggered in your test. When you run:

python -m stretch.app.ai_pickup --use_llm

and type “pick up the yellow ball”, the robot may only execute the pickup behavior. In that case, changes in hand_over_task.py will not appear, since the handover step is never called.

If you want the robot to perform multiple actions (e.g., pick up the ball and hand it to a person), try prompting something like:

  • “Bring me the yellow ball”
  • “Fetch the yellow ball for me”

If possible, could you share the exact prompt you used and the terminal logs after running it?

Best,
Jason

Hi @Mallak_Alqaisi,

Yes, at a high level, the movement of each part of the robot depends on the initial LLM output. The LLM interprets the user command, the executor selects the corresponding task, and then the task/operation code determines which joints are commanded to move (for example base, arm, lift, wrist, or gripper).
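That flow can be sketched roughly as follows. All of the names below are illustrative stand-ins, not the real stretch_ai classes or functions; the real mapping is done by the LLM and the executor:

```python
# Illustrative sketch of the command -> task -> joint flow described above.
# None of these names come from stretch_ai.

def interpret(command: str) -> list[str]:
    """Map a user command to an ordered list of task names."""
    command = command.lower()
    tasks = []
    if any(kw in command for kw in ("pick up", "bring", "fetch")):
        tasks.append("pickup")
    if any(kw in command for kw in ("bring", "hand", "fetch")):
        tasks.append("handover")
    return tasks

# Which joints each task commands (illustrative, not the real joint lists).
JOINTS_FOR_TASK = {
    "pickup": ["base", "arm", "lift", "gripper"],
    "handover": ["base", "wrist", "gripper"],
}

def execute(command: str) -> list[str]:
    """Return the joints that would be commanded, in task order."""
    commanded = []
    for task in interpret(command):
        commanded += JOINTS_FOR_TASK[task]
    return commanded
```

In a model like this, “pick up the yellow ball” only ever reaches the pickup task, which is why edits to the handover code would go unnoticed with that prompt.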

For the speed question, could you please clarify whether you mean:

  • Is it a specific joint (base, arm, lift, wrist, gripper)?
  • Or do you mean the pauses between actions during task execution?

Best,
Jason

Regarding the speed question: when I run any command on the robot to perform a task, I notice that it moves slowly, whether in its base, arm, or any other part. How can I increase its speed, especially the base speed?

Hi @Mallak_Alqaisi,

Regarding the speed: the base velocity on Stretch is intentionally limited for safety reasons. The robot’s base velocities are clamped to a maximum of around 0.3 m/s, which helps ensure safe operation and stable navigation.

So even when executing tasks, the system will generally keep the base motion within that range rather than moving faster. This is expected behavior and part of the safety design of the platform.
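As a sketch of what that clamping means in practice (the 0.3 m/s figure is the limit mentioned above, but the function itself is illustrative, not the actual stretch_ai code):

```python
# Illustrative velocity clamp; 0.3 m/s is the safety limit discussed above.
MAX_BASE_SPEED = 0.3  # m/s

def clamp_base_velocity(v: float, v_max: float = MAX_BASE_SPEED) -> float:
    """Limit a commanded base velocity to the safe range [-v_max, v_max]."""
    return max(-v_max, min(v_max, v))
```

So even if a task requested 1.0 m/s, the base would still move at only 0.3 m/s.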

If you are noticing pauses or delays between movements, it may also be related to how the task execution is sequencing actions rather than the actual joint speeds.

Best,
Jason

I now have a microphone, and I have started working on this demonstration; everything is going well. When I said “Bring me the yellow ball,” the robot picked it up, but it kept searching for, or navigating toward, the person, even though I was standing directly in front of it.

Could you guide me again through the correct steps that follow?

I feel a bit confused. :smiling_face_with_tear:

Hi @Mallak_Alqaisi,

This error means the GPU ran out of memory while trying to run the LLM. Most likely, a previous demo or process is still using the GPU.

If this happens again, you can check GPU usage with:

nvidia-smi

If you see other processes using GPU memory, stop them, then run the command again.

You can stop them using their PID:

kill -9 <PID>
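If you find yourself doing this often, the check can be scripted: nvidia-smi can emit CSV via its --query-compute-apps option, and a few lines of Python can pull out the PIDs. The sample output below is made up for illustration, not real nvidia-smi output:

```python
import csv
import io

# Sample CSV in the shape produced by:
#   nvidia-smi --query-compute-apps=pid,used_memory --format=csv
# (these rows are made up for illustration)
sample = """pid, used_memory [MiB]
12345, 7890 MiB
23456, 1024 MiB
"""

def gpu_pids(csv_text: str) -> list[int]:
    """Extract the PIDs of processes currently holding GPU memory."""
    reader = csv.reader(io.StringIO(csv_text))
    next(reader, None)  # skip the header row
    return [int(row[0]) for row in reader if row]
```

You could then review those PIDs before deciding which processes to stop.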

Best,
Jason

Great to hear the microphone setup is working and that the robot is able to pick up the object. You’re definitely following the correct steps here.

For the handover step, a few things that can help:

  • Make sure you are clearly visible in front of the robot

  • Keep your full body within the camera’s view

  • Try to remain still while it is searching

  • Ensure the environment has good lighting

If you’re still seeing the same behavior, could you please share any logs from your run? That would help us better understand what’s happening and debug further.

I have successfully completed this demonstration, but I now need your assistance to create a new task. How and where should I modify the code, and how do I then run it with my updates?

This is what makes me feel somewhat confused.

Hi @Mallak_Alqaisi,

Great to hear you were able to successfully complete the demonstration, that’s a big step!

For creating a new task, we have dedicated documentation that walks through how to do this, including where to modify the code and how to run it afterward.

I’d really recommend taking the time to go through this guide carefully and understand it end-to-end.

It includes an example of implementing a new handover task; this is a great template you can follow when building your own.

A couple of additional notes:

  • Depending on what you’re trying to achieve, you may not need a completely new task; sometimes reusing or slightly modifying an existing one is enough.

  • We already have a variety of operations available, so it’s worth exploring those first to see if they cover your use case.

  • Since stretch_ai is installed in editable mode, after making changes you can usually just rerun the application; there is no need to rebuild the workspace.

Overall, I’d strongly recommend going through the doc carefully first, as it will give you a solid understanding of how everything fits together.
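As a rough mental model of the task/operation pattern the guide describes: class and method names below are illustrative, not the actual stretch_ai API, except that the configure(message=..., sleep_time=...) call mirrors the hand_over_task.py line quoted earlier in this thread:

```python
# Rough sketch of a task built from operations; these class names are
# illustrative stand-ins, not the actual stretch_ai API.

class Operation:
    def __init__(self, name: str):
        self.name = name
        self.message = ""
        self.sleep_time = 0.0

    def configure(self, message: str = "", sleep_time: float = 0.0) -> None:
        """Mirrors the configure(message=..., sleep_time=...) call style."""
        self.message = message
        self.sleep_time = sleep_time


class Task:
    def __init__(self):
        self.operations: list[Operation] = []

    def add_operation(self, op: Operation) -> "Task":
        self.operations.append(op)
        return self

    def run(self) -> list[str]:
        # A real task would command the robot; here we just report the plan.
        return [op.name for op in self.operations]


# Example: a pickup followed by the handover message from this thread.
task = Task()
handover = Operation("finished_handover")
handover.configure(
    message="The object has been successfully delivered to you. Thank you!",
    sleep_time=5.0,
)
task.add_operation(Operation("pickup")).add_operation(handover)
```

The real documentation shows where these pieces actually live in the codebase; the sketch is only meant to convey how operations compose into a task.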

If you get stuck at any point, feel free to share what you’ve tried and I’d be happy to help.

Best,
Jason

I get this error when I try to home the robot. I rebooted the robot, but I still have the error.

Just to confirm, when you say you rebooted the robot, was that a full robot power cycle?

The error message is specifically asking to reboot the Dynamixel servos, which is a different step. Could you please try running:

stretch_robot_dynamixel_reboot.py

and then try homing again with:

stretch_robot_home.py

If it still fails, please send me the full output from both commands.

When I try to move and control the robot using teleoperation, it doesn’t work.

Hi @Mallak_Alqaisi,

Would you mind opening a new forum thread for this issue? It looks like it’s separate from the current discussion, and having its own thread will help us track and troubleshoot it more effectively.

When you do, could you please share a bit more context about what’s failing? For example:

  • What exactly happens when you try to teleoperate the robot
  • Any error messages or logs you’re seeing
  • Steps you’ve already tried

Also, for anything related to web teleop, we have a dedicated Troubleshooting section that walks through common issues and fixes. You can find it here (under the Troubleshooting section).

Once you share those details, we’ll be able to help you much faster.

Thanks so much!