Thanks for the reply.
I think pre-emption is not fundamentally in conflict with queueing (one should be able to cancel or dequeue with a good queueing API).
My primary issue with the current interface is that it sits at an awkward level: it does high-level PID control rather than simply sending current or signals to the motor, and yet it doesn't behave like a high-level API.
As a high-level, long-running trajectory-execution API (with internal PID control, and soon splined waypoint control), it should be able to give a richer status update on where it is within its trajectory execution.
What we do now is ask it to execute a trajectory (like rotate), and then poll for the status of that execution through a low-bandwidth, unreliable status API, with a lot of uncertainty. This is very round-about.
Ideally, we'd have something more along the lines of:

status_handle = base.translate_by(...)

where the status_handle gives us the status of the corresponding command.
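To make the idea concrete, here is a minimal sketch of what such a handle could look like. All names here (CommandHandle, Status, the executor hook) are hypothetical illustrations, not part of the existing hello API:

```python
import threading
from enum import Enum

class Status(Enum):
    QUEUED = "queued"
    RUNNING = "running"
    DONE = "done"
    CANCELLED = "cancelled"

class CommandHandle:
    """Hypothetical handle returned by a motion command like base.translate_by()."""

    def __init__(self):
        self._status = Status.QUEUED
        self._lock = threading.Lock()
        self._finished = threading.Event()

    def status(self):
        """Current state of the command within its trajectory execution."""
        with self._lock:
            return self._status

    def cancel(self):
        """Request pre-emption; a queued or running command becomes CANCELLED."""
        with self._lock:
            if self._status in (Status.QUEUED, Status.RUNNING):
                self._status = Status.CANCELLED
                self._finished.set()

    def wait(self, timeout=None):
        """Block until the command finishes or is cancelled."""
        return self._finished.wait(timeout)

    # Called by the (hypothetical) trajectory executor as the command progresses.
    def _set(self, status):
        with self._lock:
            self._status = status
            if status in (Status.DONE, Status.CANCELLED):
                self._finished.set()
```

The point is only that the command returns an object with per-command state, rather than the caller reverse-engineering progress from a shared, low-bandwidth status endpoint.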
I feel like this is going to be more important as you introduce even more long-running, high-level and complex trajectory control with the new upgrade.
For our use-cases, we’re actually okay with imperfect movement, and the general approximate movements of the optical encoders are fine. We don’t need precise servo-like control.
So, we commonly want pseudo-code like:

def move_robot(location):
    future1 = bot.translate_by(...)
    future2 = bot.rotate_by(...)
    future3 = bot.translate_by(...)
    return [future1, future2, future3]

# doing exploration
futures = move_robot(random_location)
safe_moving_thread(futures, check_obstacle)  # stops the movements if an obstacle is detected
objects = detect_objects()
if objects is not None:
    for future in futures:
        future.cancel()
    futures = move_robot(objects.location)
I am simplifying quite a bit, as most of the hello API is used from within our navigation/planning/low-level services, but hopefully this gives a sense of what we're trying to do.
We care about clean robot trajectories: the robot should move continuously and not look unnatural in its movement. We are fine with imperfect movement and error, as our SLAM implementation deals with it and plans the next actions just fine (i.e. the error doesn't accumulate over time, even if we didn't reach the exact point we wanted the robot to go to).