Infinitely(?) Stiff Differential Equations

Are there any algorithms out there for solving differential equations where the derivatives are discontinuous? I am attempting to simulate systems that include friction and backlash. The problem is that the models for both friction and backlash are, for practical purposes, infinitely stiff and have variable structure (in both cases there is at least one state that is sometimes algebraically dependent on another and sometimes not). While I am finding some great ways to crash MATLAB and Octave, I'm not managing to simulate my systems in a really satisfactory manner.

I'm thinking that there ought to be a solver out there that understands such infinitely stiff systems, and which will solve right up to the discontinuity, then solve on the "other side" of the discontinuity, perhaps with another system description (to account for the variable structure), without ever trying to resolve the behavior of the system _at_ the discontinuity.

I know from experience that if I'm thinking it's a clever idea _now_, it's already been done, probably before 1950. Does this ring a bell with anyone? Do you have the names of some methods that I might use, or books or articles that I might read?

Thanks in advance.

--------------------------------------
Tim Wescott
Wescott Design Services

formatting link


Reply to
Tim Wescott

The makers of gPROMS

formatting link
claim that their software handles discontinuities. In my experience they can slow things down, but I guess that's inevitable.

So there's at least one algorithm :)

Also Simulink should handle discontinuities. See the "Using Simulink" document at

formatting link
L.

Reply to
Lars

Hmm. What they say contradicts my experience, and I'm not a current owner of MATLAB (I'm waiting for a customer to come along with a project to justify it). I know that when I've tried this before, with blocks that should handle it, Simulink just stops at the discontinuity and that's that. I'll have to try it again when I have MATLAB.

Thanks.

Reply to
Tim Wescott

I don't have MATLAB, and Mathcad does not handle limits, so I roll my own. I use a system of differential equations with rules that check to make sure the changes do not go out of bounds. I also have a lot of 'if' statements to handle things like static friction versus dynamic friction. Also, pressures can't go below 0 and positions can't exceed the length of the cylinder. I use Runge-Kutta to do the actual iterations. It takes only a few pages to do a valve and a cylinder class for hydraulic simulations.
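
For what it's worth, a stripped-down sketch of that style of simulation (the model, names, and numbers below are purely illustrative, nothing like the full valve/cylinder classes): a fixed-step RK4 loop, an 'if' choosing between static and dynamic friction, and bound checks applied after every step.

% Stripped-down, illustrative only: a mass with Coulomb friction driven by
% a sinusoidal force, integrated with fixed-step RK4 plus bound checks.
m  = 10;  Fs = 50;  Fd = 30;  Lcyl = 0.5;   % mass, static/dynamic friction, stroke
dt = 1e-4;  t = 0;  x = [0; 0];             % state: [position; velocity]
for n = 1:20000
  k1 = deriv(t,      x,           m, Fs, Fd);
  k2 = deriv(t+dt/2, x + dt/2*k1, m, Fs, Fd);
  k3 = deriv(t+dt/2, x + dt/2*k2, m, Fs, Fd);
  k4 = deriv(t+dt,   x + dt*k3,   m, Fs, Fd);
  x  = x + dt/6*(k1 + 2*k2 + 2*k3 + k4);
  % rules: the position cannot leave the cylinder; kill velocity at the stops
  if x(1) < 0,    x(1) = 0;    x(2) = max(x(2), 0); end
  if x(1) > Lcyl, x(1) = Lcyl; x(2) = min(x(2), 0); end
  t = t + dt;
end

function dx = deriv(t, x, m, Fs, Fd)
  Fa = 80*sin(2*pi*t);                      % applied force (made up)
  v  = x(2);
  if abs(v) < 1e-6 && abs(Fa) <= Fs         % stuck: static friction holds the mass
    a = 0;
  else                                      % sliding: dynamic (Coulomb) friction
    a = (Fa - Fd*sign(v)) / m;
  end
  dx = [v; a];
end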

Peter Nachtwey

Reply to
Peter Nachtwey

In article, "Tim Wescott" writes: [original question quoted above; snipped]

The usual way to deal with discontinuities is to detect them using indicator functions (e.g., sign changes of some functions), locate the time point of the event precisely, stop the integration there, and immediately resume with the current solution as the new initial value. The NAG library has such a code, and MATLAB's ODE suite has the "Events" option to do exactly that. Otherwise, design one yourself; it is not too hard. If you can characterize these points by functions changing their sign there, then compute these functions along with the solution and "look ahead" for the zero. Inverse interpolation using the stored function values is a good way to detect the coming zeroes (I once had much success with inverse quadratic and cubic interpolation). You must of course use an integrator with step-size control; a strong discontinuity will reveal itself through a very sharp reduction of the step size.

A very crude alternative is to impose a minimal step size and march along the time axis with at least that step. This will jump over the point of discontinuity, producing an error on the order of that step, and later on the step size will grow back automatically.
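
To make the roll-your-own recipe concrete, here is a bare-bones MATLAB/Octave sketch for a bouncing ball (everything in it is illustrative; it uses bisection instead of inverse interpolation, and a fixed RK4 step instead of step-size control, purely to keep it short):

% Self-rolled event location: integrate, watch an indicator function g(t,x)
% for a sign change, bisect to locate the crossing, reinitialize, continue.
% (The Zeno accumulation of ever-smaller bounces is not handled here.)
g  = @(t, x) x(1);                        % indicator: height above the floor
f  = @(t, x) [x(2); -9.81];               % free-flight dynamics
dt = 0.01;  t = 0;  x = [2; 0];  e = 0.8; % drop from 2 m, restitution 0.8
for n = 1:500
  xn = rk4step(f, t, x, dt);
  if g(t, x) > 0 && g(t+dt, xn) <= 0      % sign change: event inside (t, t+dt)
    a = 0;  b = dt;                       % bisect for the crossing time
    for j = 1:40
      c = (a + b)/2;
      if g(t+c, rk4step(f, t, x, c)) > 0, a = c; else, b = c; end
    end
    x = rk4step(f, t, x, c);  t = t + c;  % stop right at the event...
    x(2) = -e*x(2);  x(1) = 0;            % ...reinitialize, and resume
  else
    x = xn;  t = t + dt;
  end
end

function x1 = rk4step(f, t, x, h)
  k1 = f(t, x);               k2 = f(t+h/2, x + h/2*k1);
  k3 = f(t+h/2, x + h/2*k2);  k4 = f(t+h, x + h*k3);
  x1 = x + h/6*(k1 + 2*k2 + 2*k3 + k4);
end

The same detect/locate/reinitialize loop carries over unchanged to a step-size-controlled integrator, or you can let ode45 with odeset('Events', ...) do the detection and location for you.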

hth peter

Reply to
Peter Spellucci

My guess is that DASSL, the variant with root finding, should be suitable. Fortran code is at

formatting link

Hope it helps. Jörgen

Reply to
Jörgen Tegnér

MATLAB can definitely do this with the ODE suite (at least using ode45). In your system equations you can have conditionals (if statements) that allow for discrete system input. You can reference time or any system variable in those conditionals and plug the output directly into a derivative.
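
For example (an illustrative fragment only; the model and numbers are made up, not from any toolbox), a right-hand side with a time-based conditional can be handed straight to ode45:

% Illustrative: a first-order lag whose input steps from 0 to 5 at t = 1 s,
% with the conditional written directly into the right-hand side.
[t, x] = ode45(@rhs, [0 3], 0);     % works, but expect many small steps near t = 1

function dx = rhs(t, x)
  if t < 1, u = 0; else, u = 5; end % discrete (stepped) input via an 'if'
  dx = (u - x)/0.2;                 % first-order lag, time constant 0.2 s
end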

tim

Reply to
Tim Franklin

Don't try to automate this with a general solver. You'll just get frustrated.

Lots of physics books solve ODEs with discontinuous coefficients or even delta functions in the coefficients. Basically, what you do is integrate by hand in a small region around the bad point, then use boundary conditions to join the solutions. For example,

y'' + ( k*2 - A*\delta (x) ) * y = 0

(here the bad point is x=0). Integrate from -a to a (a is small) and get

y'(a) - y'(-a) + 2*a*k^2 *y(0) = A * y(0)

so the first derivative is discontinuous, and the discontinuity is known, in the limit a -> 0. Since the discontinuity is a finite jump, y(x) is continuous at x=0.

In this simple case we can solve by hand: let

y(x<0) = r*cos(kx) + s*sin(kx) ;
y(x>0) = t*cos(kx) + u*sin(kx) ;

these clearly solve the equation on either side of x=0. Now we have

r = t ( y(0-) = y(0+) )

k*(u-s) = A*r

Hence the solution is

y(x) = r*cos(kx) + s*sin(kx) ,            x < 0
y(x) = r*cos(kx) + (s + A*r/k)*sin(kx) ,  x > 0 .

Clearly r and s are arbitrary and must be obtained from initial conditions or boundary conditions, etc. (That is, a 2nd order equation has 2 arbitrary constants.)

For more general cases you will have to actually integrate up to the bad spots to get the jump conditions, then continue integrating on the other side. You can't really expect a canned algorithm to know where to join the solutions, so you will have to cook it up yourself. Let me somewhat vaingloriously recommend my own lecture notes on ODEs as a starting point: look on the page

formatting link
look at the lecture notes page, and download the ode notes.

Reply to
Julian V. Noble

Simulink can do this. The blocks with discontinuities have a 'zero-crossing detection' function that finds the location of the discontinuity and then integrates on either side of it.

I don't know your problem, but it may be a typical 'state event' type of thing where you need a different structure before and after the event. Simulink's enabled subsystems handle this nicely. Take a look at the clutch demo.

dave y.

Reply to
dave y.

I was hoping that there was some general-purpose way of doing this: basically an algorithm that lets you tell it that it's crossed a discontinuity, or approximately where the discontinuity is in state space, so it can integrate up to it and then tunnel over to the other side. Since I'm expecting to do this sort of thing a _lot_, I can't use hand methods.

It sounds from other posts like I may have to spring for Simulink (sigh).

Reply to
Tim Wescott

Look here:

formatting link
A robust procedure for discontinuity handling in continuous system simulation, Transactions of the Society for Computer Simulation International, Volume 2, Issue 3 (September 1985), L. G. Birta, T. I. Oren, D. L. Kettenis

There are more papers published after 1985, in (as I recall) ACM Transactions on Mathematical Software, ACM Transactions on Modeling and Computer Simulation, Artificial Intelligence, and Simulation. Unfortunately, I cannot find the references at the moment.

The issue is briefly discussed in the book "Continuous System Modeling" by Cellier, section 2.8, "Discontinuity Handling".

The most up-to-date presentation of continuous-discrete system simulation is in the book "Theory of Modeling and Simulation: Integrating Discrete Event and Continuous Complex Dynamic Systems" by Zeigler, Praehofer and Kim. I also recommend Praehofer's Ph.D. thesis; it augments the book. However, the dissertation ("System theoretic foundations for combined discrete-continuous system simulation") is not available online; you have to ask Dr. Praehofer

formatting link
On a historical note:

A complete discrete-continuous simulation system is described in the book "The GASP IV Simulation Language" by Alan Pritsker. Discontinuities are handled by using an "event function" and located by solving an algebraic equation. Unfortunately, everything is in Fortran IV, and the publication date is 1974. The book is old, but the methodology described there is still in use.

A detailed algorithm for modeling dynamical systems with discontinuities is presented in an even older book, "Computer Simulation of Dynamical Systems" by Kochenburger, chapter 6, "Simulation of Discontinuous Relations". The publication date is 1972. Half of the book is about analog computers, but detailed algorithms for computer simulation are also there.

A.L.

Reply to
Andrzej Lewandowski

Why not use the Laplace transform instead? It is, of course, equivalent but a more straightforward way to obtain the solution.

The k*2 in Julian's equation above should be k^2.

Using Mathematica, we take the Laplace transform of the differential equation (generalized to include a shifted delta function)

LaplaceTransform[y''[x]+(k^2 - A DiracDelta[x-a]) y[x] == 0, x, s] // Simplify

to get

(k^2+s^2) LaplaceTransform[y[x], x, s] == s y[0]+A y[a] E^(-a s)+y'[0]

Then algebraically solve (which is trivial),

Solve[%, LaplaceTransform[y[x], x, s]];

and take the inverse transform (this is the difficult step for more complicated differential equations -- there are, however, arbitrary precision methods useful for computing numerical inverses),

InverseLaplaceTransform[LaplaceTransform[y[x],x,s] /. First[%], s, x];

Putting a->0 and collecting terms

Collect[% /. a -> 0, {Cos[_], Sin[_]}, Simplify]

we get

Cos[k x] y[0] + Sin[k x] (A UnitStep[x] y[0] + y'[0])/k

which is equivalent to the solution you give.

Cheers, Paul

Reply to
Paul Abbott

If you really didn't want to buy any more software, you could use Runge-Kutta:

formatting link

It may get messy, but you can definitely use discontinuous values here as well. Note that this is not a variable-step-length integrator.

tim

Reply to
Tim Franklin

You can use RK for equations with a discontinuous right-hand side, but you should not. For a discussion of why, see the book "Solving Ordinary Differential Equations I" by Hairer, Norsett and Wanner, Section II.5, "Further Questions of Practical Computation - Discontinuous Equations". Some numerical experiments with RK and discontinuous equations are presented there, with the conclusion: "... the results are quite chaotic. In general, one loses one or two digits of precision and the number of steps is very large".

A.L.

Reply to
Andrzej Lewandowski

Greetings All,

At this point, it may be good to discuss discontinuities in continuous behavior in a little more detail as there are different aspects addressed throughout this thread.

Typically the occurrences of discontinuous changes are modeled by 'zero-crossing functions'. When such a function changes its sign, some discontinuity may be effected. For numerical solvers, this is important information, as they tend to rely on continuity assumptions, and so any discontinuous behavior (even if the discontinuity occurs in a higher order derivative with respect to time) may degrade their performance.

The following needs to be considered (I will refrain from going into the details; they can be found, along with references, at the link below):

formatting link

  1. Event detection: The sign of the zero-crossing function is evaluated at the integration points and compared between successive points. If the function has an even number of zeros in the interval between two integration points, you will miss them.

  2. Event location: Once you know a sign change occurs between two points in time, you may want to locate exactly where the zero crossing occurs (note that there are applications where you would rather not do this).

After these two activities, you can safely make the change to your model and restart your numerical integration. This brings about another issue:

  3. Reinitialization: After you have made the discontinuous change to your model, consistent initial conditions for continuing to simulate the continuous-time behavior have to be found. This is the topic of the material quoted in this missive.

The reinitialization can be implicit or explicit. For example, in the case of a bouncing ball, you would reinitialize its velocity when it hits the floor by reversing it upon impact, possibly accounting for a coefficient of restitution (< 1).

In other cases, the consistent initial conditions that you need are more difficult to obtain as they are implicit in the new model structure (this happens for differential and algebraic equations [DAEs] of 'high index'). For example, if you close a switch (modeled as ideal) between two capacitors connected in parallel to ground, there is an instantaneous redistribution of their charge to make sure they have the same voltage drop.

The new charge distribution can actually be computed from the system of equations such that conservation of charge is maintained (a small numerical illustration follows at the end of this list). There is a fairly extensive body of literature on this topic. A paper I like myself is

@ARTICLE{verghese81,
  author  = "George C. Verghese and Bernard C. L{\'{e}}vy and Thomas Kailath",
  title   = "A Generalized State-Space for Singular Systems",
  journal = "IEEE Transactions on Automatic Control",
  volume  = 26,
  number  = 4,
  month   = aug,
  pages   = "811--831",
  year    = 1981,
}

In the past, I have used a numerical routine by Andras Varga to compute the consistent initial conditions by deriving a pseudo-Weierstrass form first. That worked well.

Note that just a minimum-norm projection will, in general, result in physically incorrect values (i.e., you may generate charge or momentum or any other extensive quantity).

  4. Reduced dynamics: Once you have computed consistent initial conditions, you have to make sure you keep generating consistent values. This is not trivial if your 'generalized state space' is of higher dimension than your actual state space. In that case you live in a subspace, and you have to make sure that you do not leave this subspace during numerical integration of the continuous-time behavior.

There is quite a bit of literature on this topic as well. An often-cited algorithm was presented by Pantelides in

@ARTICLE{pantelides88,
  author  = "Constantinos C. Pantelides",
  title   = "The Consistent Initialization of Differential-Algebraic Systems",
  journal = "SIAM Journal on Scientific and Statistical Computing",
  volume  = 9,
  number  = 2,
  pages   = "213--231",
  month   = mar,
  year    = 1988,
}

  5. Sequences of mode changes: Now you can also have a number of zero-crossing functions changing their sign as a result of the newly computed state variable values. This causes a sequence of state changes, with their corresponding complications. Let me just refer to a paper I wrote
    formatting link
    which gives a nice overview.
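
To make item 3 a little more concrete, here is the switch-between-two-capacitors reinitialization from above worked out numerically in MATLAB/Octave (the component values are made up; the conservation rule is the point, and the statements are left without semicolons so the values print):

% Consistent reinitialization after closing an ideal switch between two
% charged capacitors: conserve charge, not Euclidean distance in state space.
C1 = 10e-6;  v1 = 5;                 % capacitor 1: 10 uF charged to 5 V
C2 = 22e-6;  v2 = 1;                 % capacitor 2: 22 uF charged to 1 V

% Conservation of charge gives the consistent post-switch voltage:
v_new = (C1*v1 + C2*v2)/(C1 + C2)    % 2.25 V for these values

% A plain minimum-norm projection onto the constraint v1 = v2 would give
% the average instead, which silently creates charge:
v_minnorm = mean([v1, v2])           % 3 V
q_before  = C1*v1 + C2*v2            % 72 uC before the switch closes
q_minnorm = (C1 + C2)*v_minnorm      % 96 uC (charge appeared from nowhere)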

Anyway, I hope all this clarifies the complexity of simulating hybrid dynamic systems a bit, or, at least, helps in structuring whatever discussion is to follow.

Have a great weekend, y'all!

Pieter

Reply to
Pieter Mosterman

Yes.

The main problem with Laplace transforms is that they only work for cases with constant coefficients (or sometimes when the coefficients are simple functions). They are not exactly ideal for the general case. Thus, e.g., if the original equation had been

y'' + ( k^2 - A*\delta (x) -u(x) ) * y = 0

where u(x) is some continuous function, you would have a problem.

Moreover, I placed the discontinuity at x=0, assuming one wanted the solution for both x > 0 and x < 0.

Reply to
Julian V. Noble

Tim,

That is my experience with Simulink, too. It just stops dead.

You are right on the money about the way to handle the discontinuities. There are two commercial codes I use that implement these methods: AMESim ("discontinuity handling") and EASY5 ("switch states"). They interpolate to solve for the intersection and then do the bounce, so to speak, switching the velocity direction rather than shrinking the step size down to nothing and turning the trajectory around in a really tight continuous curve. I like the switch-states terminology personally. The infinitely stiff concept is discussed in the classic text (it says so on the back!) on DAEs, differential-algebraic equations. It is the opposite limiting behavior from what you are asking about, namely discontinuities. You need DASSL or DASPK or RADAU and the switch states to go fast through the bumps. Simulink doesn't have the switch states, so it goes down to machine-precision time steps, and unless you're God ("The mountains flowed before the Lord" - the Prophetess Deborah) it seems like a long time to wait.

William Gear and Linda Petzold are the names I remember offhand for searching. Search on all the all-caps words, for that matter. I think you can download white papers on the switch states or discontinuity handling from the respective websites for more details (and corrections, I am sure).

Todd

Reply to
Todd Rose
