Motion Interpolation / Trajectory generation

Please can someone point me in the right direction? I'm hoping that this is a solved problem for somebody out there.

I need a function that limits acceleration and velocity, generating at each sample time a new position that meets the velocity and acceleration constraints. The input position is already kept within bounds.

A vital design requirement is low signal latency, ideally I want a current answer for each sample.

I started by looking at motion interpolators of various kinds, but interpolators assume that you begin with a fixed start and stop position, and from those generate a series of interpolation points (one per sample time) that take you from start to stop, such that you reach the end position with zero velocity.

I started with code that calculated the velocity and acceleration and limited them:

deltaT = time step between points
Start with cur_pos = cur_vel = cur_acc = 0.
max_acc and max_vel are set to positive values and don't change during runtime.

PSEUDOCODE:

function LimitVA ( new_pos )
    // distance from the current position to the requested one
    dist = new_pos - cur_pos
    // velocity needed to get there from here in one step
    vel = dist / deltaT
    // acceleration needed to reach that velocity
    acc = ( vel - cur_vel ) / deltaT
    // limit the acceleration to allowed values
    acc = limit ( acc, -max_acc, max_acc )
    // calculate new velocity (using "v = u + a*t")
    vel = cur_vel + acc * deltaT
    // limit the velocity to allowed values
    vel = limit ( vel, -max_vel, max_vel )
    // calculate new position
    cur_pos = cur_pos + vel * deltaT
    // remember velocity
    cur_vel = vel
    // result
    return cur_pos

Now, this version has terrible overshoot, followed by ringing. For example, using a step function as input, the output position accelerates to the end point, and is going far too fast to stop there.
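To show what I mean, here is the same algorithm as a runnable Python sketch (class and variable names are my own choosing), driven with a unit step so the overshoot is easy to see:

```python
class NaiveLimiter:
    """Direct transcription of the naive velocity/acceleration limiter."""

    def __init__(self, delta_t, max_vel, max_acc):
        self.dt = delta_t
        self.max_vel = max_vel
        self.max_acc = max_acc
        self.cur_pos = 0.0
        self.cur_vel = 0.0

    def update(self, new_pos):
        vel = (new_pos - self.cur_pos) / self.dt   # velocity needed this step
        acc = (vel - self.cur_vel) / self.dt       # acceleration to reach it
        acc = max(-self.max_acc, min(acc, self.max_acc))
        vel = self.cur_vel + acc * self.dt
        vel = max(-self.max_vel, min(vel, self.max_vel))
        self.cur_pos += vel * self.dt
        self.cur_vel = vel
        return self.cur_pos

lim = NaiveLimiter(delta_t=0.01, max_vel=1.0, max_acc=2.0)
trace = [lim.update(1.0) for _ in range(300)]
print(max(trace))   # sails well past the 1.0 target before ringing back
```

Deceleration only kicks in when the one-step desired velocity drops below the current velocity, which happens far too close to the target to stop in time.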

So I modified the code to take into account the 'time to stop' as shown:

PSEUDOCODE:

function LimitVA ( new_pos )
    // how long to stop at the current velocity?
    time_to_stop = abs ( cur_vel / max_acc )
    // where will we be then? ( s = u*t + 0.5*a*t^2 )
    projected_move = cur_vel * time_to_stop + 0.5 * cur_acc * time_to_stop^2
    dist = new_pos - cur_pos
    // change velocity to reflect the projected stop position
    vel = ( dist - projected_move ) / deltaT
    acc = ( vel - cur_vel ) / deltaT
    acc = limit ( acc, -max_acc, max_acc )
    vel = cur_vel + acc * deltaT
    vel = limit ( vel, -max_vel, max_vel )
    cur_pos = cur_pos + vel * deltaT
    cur_vel = vel
    cur_acc = acc
    return cur_pos

Now this is better, but it is 'unstable'. For some values of max_vel and max_acc, and for different position sequences, the output can 'hunt' around a reasonable value. Also, this code assumes that we will stop - which may not be true. With slowly changing signals the result is sometimes a series of steps, which I want to avoid.

Variations on the above code have not given significantly better results.

Now, I'm looking at other possibilities, which have taken me in the direction of curve fitting. But I'm not sure if I've missed something in the above.

Also, choosing a curve fitting algorithm, given the desired constraints on velocity and acceleration, looks like a non-trivial task.

So, if anyone out there has a suitable alternative, advice where I may be going wrong, or can point me at a web resource, I would be very grateful.

Thanks, TrissT


You're trying to control some physical thing, right? I'm taking the liberty of cross-posting this answer to sci.engr.control, so the experts there can hack it apart, along with the experts here on comp.dsp.

This can be done, but the mathematics -- at least the way that I know how to do it -- get a bit odd. So forgive me if it's a bit mind-bending.

If your plant were a perfect double integrator and if you could measure its position and velocity perfectly, this would be a moderately easy problem in optimal control. I'll start my development with that assumption, then later on I'll break it and show you what to do about it.

The acceleration part of the control rule is easy: if your error is positive, accelerate positive; if your error is negative, accelerate negative; if your error is zero, don't accelerate. So:

    { -A    target - pos < 0
a = {  0    target - pos = 0
    { +A    target - pos > 0

Of course, this results in your velocity getting too high. So, modify the rule to take that into account (note that I'm ignoring the problem of getting back to the correct velocity if you go over -- if we can assume a perfect plant we can assume that'll never happen):

    { -A    target - pos < 0 && velocity > -Vmax
a = {  0    target == pos || velocity == -Vmax || velocity == +Vmax
    { +A    target - pos > 0 && velocity < +Vmax

This still leaves you with significant (ehem) overshoot when you reach your target. To solve this one you need to look at how your plant decelerates:

dv/dt = -A,

dx/dt = v.

This is just a simple state-space system with two states. If you're twisted, you can eliminate the time variable in the above differential equation, reducing it from a two-state linear differential equation to a one-state nonlinear one:

dv/dx = -A / v.

Since it's a single-order diff. eq, it's pretty easy to guess at a solution:

v = sqrt(b(x + c)), therefore

dv/dx = (b / 2) / sqrt(b(x + c)).

The two dv/dx terms can be equated:

(b/2) / sqrt(b(x + c)) = -A / v,

which implies that b = +/- 2A (note the ambiguity of the sqrt function).

We want the thing to come to a stop at pos == target, so it's quite sensible to set c = -target. Keeping in mind that x = pos:

v = +/- sqrt(+/- 2A(pos - target)).

We still have that +/- term, but that can be resolved with logic: if pos < target then we want a positive velocity, and if pos > target then we want a negative velocity. We'd also kind of like to have real terms inside of the square root. All that's left is to accept the reality that the above expression is for a _target_ velocity, and we have:

           {  Vmax                      sqrt(2A(target - pos)) > Vmax
           {  sqrt(2A(target - pos))    pos < target
v_target = {  0                         pos == target
           { -sqrt(2A(pos - target))    pos > target
           { -Vmax                     -sqrt(2A(pos - target)) < -Vmax
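In code, that target-velocity rule is a few lines (a Python sketch, with A and Vmax taken as parameters):

```python
import math

def v_target(pos, target, A, Vmax):
    """Stop-on-a-dime target velocity: decelerating at A from this
    speed brings the plant to rest exactly at the target."""
    err = target - pos
    if err > 0:
        v = math.sqrt(2.0 * A * err)      # approach from below: positive v
    elif err < 0:
        v = -math.sqrt(2.0 * A * -err)    # approach from above: negative v
    else:
        v = 0.0
    return max(-Vmax, min(v, Vmax))       # saturate at the velocity limit

print(v_target(0.0, 2.0, A=1.0, Vmax=10.0))   # sqrt(2*1*2) = 2.0
```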

Now we just remember that we're dealing with an absolutely perfect plant, so we get

    { +A    v < v_target
a = {  0    v == v_target
    { -A    v > v_target

What could be easier?

(A system that actually works could be easier).

Remember that we started with the assumption that we have an absolutely perfect plant. If your plant is anything _at all_ other than absolutely perfect, the above rigamarole is just mathematical fantasizing. Why? Because it has infinite gain in not one, but two places.

The first place where the infinite gain is apparent is in the calculation of the acceleration command from the velocity -- the acceleration command _jumps_ from -A to +A (forget about 0) at the slightest discrepancy -- so you'd have a system that oscillated madly right there.

The second place where the infinite gain creeps in is where you take the square root of the position error. This term has an infinite gain when the position error gets close to zero, so once again you'll have a system that oscillates madly.

You could incorporate the above rules into a sliding mode controller, and if you didn't mind the bad noises and hot motors you could call it a day and go home.

Or you could stabilize the beast.

Stabilizing in velocity is easy. Just change the acceleration command equation from infinite gain to some finite gain:

    { +A                   (v_target - v)*kv > +A
a = { (v_target - v)*kv    otherwise
    { -A                   (v_target - v)*kv < -A

This will give you a controller that will -- assuming you set kv right -- rapidly accelerate to your desired maximum velocity, then start smoothly decelerating to your desired position -- then start oscillating like mad.
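That finite-gain rule is just a proportional velocity loop with a saturation; a minimal sketch (kv and the clamp A are the tuning knobs):

```python
def accel_cmd(v, v_tgt, kv, A):
    # Proportional gain on the velocity error, saturated at the
    # acceleration limit so we never command more than +/-A.
    a = (v_tgt - v) * kv
    return max(-A, min(a, A))

print(accel_cmd(0.0, 1.0, kv=10.0, A=2.0))   # saturated: 2.0
print(accel_cmd(0.5, 1.0, kv=2.0, A=2.0))    # linear region: 1.0
```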

I am _not_ going to do all the remaining math here, because it gives my brain cramps. But I'll outline it, and leave the exact solution as an exercise for the reader. The problem you're dealing with is the two- sided square root function that you're trying to use (pardon the poor resolution of ASCII art):

______________
              \_
                \
                 |
-----------------+-----------------
                 |
                  \
                   \_
                     ______________

(v_target versus position: two square-root branches, saturated at +/-Vmax, meeting vertically at the target.)

What you need to do is to flatten the thing out in the middle. You do this by separating the two pieces, and laying a line of your desired slope in the middle:

______________
              \_
                \       <- move this branch over by x1
                 \
                  \     <- add this straight section in, slope kp
                   \
                    \_  <- and move this branch over, too
                      ______________

(The same curve with the two square-root pieces pulled apart and a straight line of slope kp joining them through the target.)

So you end up with a target velocity curve that has

           {  Vmax                           sqrt(2A(target - pos - x1)) > Vmax
           {  sqrt(2A(target - pos - x1))    pos < target - x2
v_target = {  (target - pos)*kp              otherwise
           { -sqrt(2A(pos - target - x1))    pos > target + x2
           { -Vmax                          -sqrt(2A(pos - target - x1)) < -Vmax

Your task is to find the x1 and x2 so the lines all sit nicely when you've chosen a kp.

Notice that the nice thing about this controller is that when your v_target calculation is in the center region, it is just a plain old PD controller, so you can analyze it for stability as such (and you can even add an integral term to the velocity servo, if you need to). All of the nonlinear stuff just makes sure that the controller puts the brakes on far enough ahead of the target position that if it can stop on a dime, it will.
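Putting the whole thing together in Python (a sketch, not gospel: I've done the breakpoint exercise by matching value and slope where the line meets the square-root branches, which gives x1 = A/(2*kp^2) and x2 = A/kp^2; the gains and step sizes below are arbitrary):

```python
import math

def v_target(pos, target, A, Vmax, kp):
    # Breakpoints chosen so (target - pos)*kp meets the sqrt branches
    # with matching value and slope: x1 = A/(2*kp^2), x2 = A/kp^2.
    x1 = A / (2.0 * kp * kp)
    x2 = A / (kp * kp)
    e = target - pos
    if e > x2:
        v = math.sqrt(2.0 * A * (e - x1))
    elif e < -x2:
        v = -math.sqrt(2.0 * A * (-e - x1))
    else:
        v = kp * e                      # plain proportional center region
    return max(-Vmax, min(v, Vmax))

def step(pos, vel, target, dt, A, Vmax, kv, kp):
    # Inner velocity loop: finite gain kv, clamped at the accel limit.
    a = max(-A, min((v_target(pos, target, A, Vmax, kp) - vel) * kv, A))
    vel += a * dt
    pos += vel * dt                     # perfect double-integrator plant
    return pos, vel

# Drive a perfect double integrator at a step target of 1.0:
pos, vel = 0.0, 0.0
for _ in range(3000):
    pos, vel = step(pos, vel, 1.0, dt=0.001, A=2.0, Vmax=1.0, kv=50.0, kp=5.0)
print(round(pos, 3), round(vel, 3))   # settles at the target with ~zero velocity
```

On a real plant you would feed back measured position and velocity instead of integrating your own, but the v_target calculation is unchanged.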

(Someone will come in and whine about this controller requiring lots of math -- well, yes, it does. But it's not much more than what's required to do a trapezoidal profile from point A to point B; the biggest difference is that you're constantly updating the thing online instead of pre-calculating a profile. If you're concerned about doing the square root online, you can pre-compute v_target as a function of position offset in a variety of ways, depending on your desired tradeoff of code complexity, portability, memory usage, speed, etc.)
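One way to do that pre-computation (a sketch, assuming A and Vmax are fixed at startup): tabulate the square-root curve over position error once, then linearly interpolate at runtime and use symmetry for negative errors.

```python
import math

def build_v_table(A, Vmax, max_err, n):
    """Tabulate the stop-on-a-dime velocity for err in [0, max_err]."""
    step = max_err / (n - 1)
    return step, [min(math.sqrt(2.0 * A * (i * step)), Vmax) for i in range(n)]

def v_lookup(err, step, table, Vmax):
    sign = 1.0 if err >= 0 else -1.0    # curve is odd-symmetric
    e = abs(err)
    i = int(e / step)
    if i >= len(table) - 1:             # past the table: saturated anyway
        return sign * Vmax
    frac = e / step - i
    return sign * ((1 - frac) * table[i] + frac * table[i + 1])

step_sz, tbl = build_v_table(A=2.0, Vmax=1.0, max_err=1.0, n=1025)
print(v_lookup(0.09, step_sz, tbl, Vmax=1.0))  # close to sqrt(2*2*0.09) = 0.6
```

Table size trades memory against interpolation error; a fixed-point version of the same idea works fine on small processors.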

Reply to
Tim Wescott
