PID autotuning - not working for heating application



Peter, Okay, I will post that experiment but it's not as clean. Since I only had shared access to the prototype I couldn't let the machine cool down long enough for a real restart, and (of course) the room temperature changed. These thermal systems have really long "tails"; some sections (plastic) absorb heat and let it out very slowly.
Ray


Well, I looked around. While I do have SIMO heater-by-heater data, the input for the heater in question is random, an attempt to obtain the kind of information the sys-id routines like. Incidentally, in case I forget: some of the data that was taken has a problem, which I found out only after much work and after threatening to sue the programmers; the PWM percentages were rounded down to whole units rather than tenths. That's the reason for the second set of PWM data. Maybe I should have quit when they separated programming from engineering (: Endeavour to write and check your own control and monitoring algorithms; you will have a happier life.
Ray


As I said above, the quality of the data is very important. The initial state must be known, and usually that means making sure the system is at a steady state where the derivatives are 0. It is too bad the data got truncated too. I work with motion control systems, where it is easy to make sure the system is stopped before starting the excitation procedure. If the ambient temperature, or C, changes during the test, then that must be recorded too; then it isn't a parameter to be determined.
Are the time periods seconds? If so, the time constants are long. Long time constants make it hard to see the difference between small changes in the estimated time constants. The optimizing routines calculate a sum of squared errors (SSE). I like to think of the SSE as the elevation in a multi-dimensional terrain, with the optimizer seeking the valley floor or pit. The slope of the SSE is checked in all dimensions, one for each parameter being optimized. If the slopes are very flat, it is hard for the optimizing routine to reach a final best position. Truncated data makes this almost impossible, because small differences make a big difference on a flat 'desert'.
We have a motor that we put a relatively heavy disk on. The added weight increases the time constant to about 1 minute, which is forever in a motion control system. This system doesn't ID with a consistent set of numbers, but all of them seem to work. It appears that there are some steep hills, but once the SSE gets off the steep hills the valley is more like a flat 'desert'. The long time constants do make finding the lowest part of the 'desert' difficult. However, any solution on the desert floor seems to work equally well, since the elevations are all the same; the question is whether they will work equally well under all motion conditions. This is odd but true. We have put a lot of effort into exciting the motor so that the SSE 'desert floor' has more of a slope.
Motors with short time constants ID with more consistent parameters.
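To make the flat 'desert' concrete, here is a minimal sketch with invented numbers (not data from Ray's machine): a slow first-order lag sampled over a short test window, with the readings rounded off to mimic the truncation mentioned above.

# Minimal sketch, invented numbers: a first-order step response with a long
# time constant, sampled over a short window and quantized. The SSE barely
# changes for large errors in the estimated time constant, so the optimizer
# sees an almost flat 'desert' floor.
import numpy as np

def step_response(tau, t):
    # Unit-step response of a first-order lag: y = 1 - exp(-t/tau)
    return 1.0 - np.exp(-t / tau)

t = np.arange(0.0, 300.0, 1.0)     # 5-minute test window, 1 s samples
true_tau = 600.0                   # slow thermal time constant, seconds
quant = 0.1                        # simulate truncated/rounded readings
y_meas = np.round(step_response(true_tau, t) / quant) * quant

for tau_hat in (500.0, 550.0, 600.0, 650.0, 700.0):
    sse = np.sum((y_meas - step_response(tau_hat, t)) ** 2)
    print(f"tau_hat = {tau_hat:5.0f} s   SSE = {sse:.4f}")

A 100-second error in the estimated time constant hardly moves the SSE at all, so the quantization noise dominates the slope the optimizer is trying to follow.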
Ray, you had the cards stacked against you, but your main problem was with the data, not optim() or lsqrsolve(). The superimposed method would work. The MIMO problem was not the biggest problem you had.
Hopefully this explains why auto-tuning sometimes doesn't work.
Peter Nachtwey

I am sure you don't want to get into this area, but.... For a multi-state SISO system with poles on the negative real axis there is a mathematical theory that is guaranteed to produce a convergent series of models; I think it was extended to complex poles as well. It is based on Müntz polynomials being converted to an orthogonal polynomial series; really posynomials. Good news and bad news from this theory. Bad news: almost any sequence of poles will work. Good news: almost any sequence of poles will work. That means you can guide the process and make mistakes; the mistakes will be washed out later. But the result, while accurate, will not necessarily be physically meaningful. It also implicitly means that there are an infinite number of perfectly accurate models, which might throw optimization procedures off a little. To summarize: you can estimate the two dominant poles, then by subtracting them out of the data in a precise manner find another pole, use that in a precise manner, and so on; the "precise manner" is the generation of the orthogonal polynomial sequence. The most obvious application would be to use the impulse response, but I think I see how to extend it to arbitrary inputs. I have been thinking about applying it to the standard sys-id process to make successive approximations converge, getting better and better as the order increases. I haven't managed to get any insight into how to do this for MIMO approximations.
Enough said.
Ray


Why not use the principle of superimposition? Test each heater with respect to each sensor and then find the FOPDT or SOPDT coefficients that work. For the first temperature sensor you have a FOPDT formula that looks like

t1' = A1*t1 + B11*u1(t-dt11) + B12*u2(t-dt12) + B13*u3(t-dt13) + C

Where:
t1 is the temperature at sensor 1.
A1 is the system time constant at temperature sensor 1. This is basically exp(-t/tau1).
B11 is the input coupling of heater 1 to sensor 1.
B12 is the input coupling of heater 2 to sensor 1.
B13 is the input coupling of heater 3 to sensor 1.
u1(t-dt11) is the heater 1 signal delayed by the dead time.
dt11 is the dead time from heater 1 to sensor 1.
C is the ambient temperature. It had better be the same for all tests, unless the ambient temperature really is changing.

It is easy to ID B11, B12 and B13 if the heaters are turned on one at a time, but the starting point should be ambient temperature or some steady state. When done you would have this:

t1' = A1*t1 + B11*u1(t-dt11) + B12*u2(t-dt12) + B13*u3(t-dt13) + C
t2' = A2*t2 + B21*u1(t-dt21) + B22*u2(t-dt22) + B23*u3(t-dt23) + C
t3' = A3*t3 + B31*u1(t-dt31) + B32*u2(t-dt32) + B33*u3(t-dt33) + C
All the coefficients could probably be ID'd at once, but then it would be much harder to get exact values. It is best to do small sections at a time and rely on superimposition.
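Here is a rough discrete-time sketch of the first equation above, with invented numbers for A1, the B couplings, the dead times and the ambient term, just to show why testing one heater at a time adds up.

# Rough sketch of the MISO first-order model above in discrete time, with
# invented coefficients. It checks that the response with all three heaters
# on equals the sum of the single-heater responses minus 2x the ambient-only
# run, which is what makes identifying B11, B12, B13 one heater at a time
# legitimate.
import numpy as np

a1 = 0.99                      # ~ exp(-Ts/tau1) for sample time Ts
b = [0.02, 0.01, 0.005]        # B11, B12, B13 (hypothetical couplings)
d = [5, 12, 20]                # dead times in samples (hypothetical)
c = 0.25                       # ambient drive per sample (25 C steady state)
n = 2000

def simulate(u_list):
    # t1[k+1] = A1*t1[k] + sum_j B1j*uj[k-d1j] + C, starting at steady state
    t1 = np.full(n, 25.0)
    for k in range(n - 1):
        drive = sum(b[j] * u_list[j][k - d[j]] if k >= d[j] else 0.0
                    for j in range(3))
        t1[k + 1] = a1 * t1[k] + drive + c
    return t1

u_on, u_off = np.full(n, 50.0), np.zeros(n)   # 50 % PWM step vs heater off

all_on  = simulate([u_on, u_on, u_on])
singles = [simulate([u_on if j == i else u_off for j in range(3)])
           for i in range(3)]
ambient = simulate([u_off, u_off, u_off])

err = all_on - (singles[0] + singles[1] + singles[2] - 2.0 * ambient)
print("max superposition error:", np.max(np.abs(err)))   # ~0 for a linear plant

If that superposition error is not essentially zero, the plant has a nonlinearity (radiation loss, temperature-dependent coupling, a saturating heater), and that is worth knowing before trusting the one-heater-at-a-time coefficients.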
The way I ID a system is like this http://www.deltamotion.com/peter/PDF/Mathcad%20-%20Sysid%20SOPDT.pdf
1. On page 1/10 I define the ideal SOPDT system. I chose different values to see how well the system identification works under different conditions. Notice that there is dead time and I don't assume all the poles are at the same location like others on this newsgroup.
2. At the bottom of page 2/10 I generate the test data that is later used for system identification. I add noise to the ideal data just to simulate reality a bit. The CO(t) function is a few steps. The function can be arbitrary, but I have found that the excitation is critical to the identification. Dead times and time constants are determined more accurately if there are step or rapid changes. The gain and ambient coefficients are determined more accurately if there are steps at different levels.
3. On page 3/10 I plot and save the generated test data. I can post it on my FTP site for you to practice with. Notice that this data has dead time and two poles that aren't at the same location. I could have added more noise, but the quasi-Newton method seems to filter it out well.
4. On page 4/10 the system identification is done. Mathcad's Minerr function can act like either Scilab's optim() function or the lsqrsolve function, depending on the option chosen. I chose the quasi-Newton optimization, which is similar to the optim() function. Runge-Kutta is used to integrate the differential equation. The differential equation doesn't need to be linear; I could easily put a nonlinear term in there, like one that changes the gain as a function of temperature. This happens with heat exchangers because of the LMTD. Fluid systems are often of the form v'=g/m-K*v^2. It is easy to ID a nonlinear system IF you know the general form of the equation and just need to ID the constants. Notice that the ID'd poles are closer together than the real poles. I have noticed that system identification tends to ID the poles closer together than they really are. Notice that I also ID a dead time and an ambient temperature. This is something that JCH does not do. At the bottom, the MSE() (mean squared error) function is where I calculate the mean squared error between the estimated temperature and the actual or test data temperature. The Minerr function adjusts Kp, t1, t2, thetap, and C till the MSE is minimized. You can see the results are not perfect, but that is reality.
5. On page 5/10 the actual or perfect response is compared to the estimated response. The response looks close, almost identical, even though the system identification puts the estimated poles closer together. Also notice that a good system identification routine can ID systems that are excited by more than just a step change. In fact it must be able to do system identification with arbitrary excitation. Above I said the excitation is the key to doing system identification. One key is to make multiple steps at different levels. This is very important for computing the gain, especially when the gain isn't linear. A heat exchanger's gain changes because of the LMTD (log mean temperature difference).
6. On page 6/10 PID gains are calculated using the estimated plant parameters found by system identification. My formula is a little more complex than the IMC formulas, but the response is faster/better for the same closed-loop time constant. I doubt the extra complexity in the formula is worth the effort for most applications.
7. Page 7/10 simulates the PID control of the original system using the gains calculated from the system identification. Notice that feedback noise is simulated as well as the dead time.
8. Page 8/10 shows the response from the simulation. The response isn't perfect because there was noise in the original data used to do the system identification. The system identification is not perfect because the poles are closer together than they should be, and I simulated noise on the feedback, but this is closer to reality.
9. Page 9/10 uses the internal model control gain formulas that I got from the www.controlguru.com site. They work well too and are much simpler, but they don't work quite as well as mine. I should have provided an IAE value for my gains and the IMC gains for comparison.
10. Page 10/10 shows the IMC response is a little slower, but most would be pleased with it.
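For anyone without Mathcad, the pages 2-4 workflow can be sketched in Python/SciPy. This is only an illustration of the method, not my worksheet; the plant numbers, the noise level and the Nelder-Mead optimizer (standing in for Minerr or optim()) are all assumptions.

# Sketch of the pages 2-4 workflow: make noisy SOPDT step-test data, then
# recover Kp, tau1, tau2, dead time and ambient C by minimizing the mean
# squared error. All plant numbers below are invented.
import numpy as np
from scipy.optimize import minimize

dt = 1.0
t = np.arange(0.0, 1200.0, dt)
u = np.where(t < 200, 0.0, np.where(t < 700, 40.0, 25.0))  # CO(t): two steps

def sopdt(params, u, dt):
    # Euler simulation of y = Kp*e^(-theta*s)/((tau1*s+1)(tau2*s+1)) * u + C
    kp, tau1, tau2, theta, c = params
    tau1, tau2 = max(tau1, 2 * dt), max(tau2, 2 * dt)  # keep Euler stable
    delay = int(round(theta / dt))
    x1, x2 = np.zeros(len(u)), np.zeros(len(u))
    for k in range(len(u) - 1):
        ud = u[k - delay] if k >= delay else 0.0
        x1[k + 1] = x1[k] + dt * (kp * ud - x1[k]) / tau1
        x2[k + 1] = x2[k] + dt * (x1[k] - x2[k]) / tau2
    return x2 + c

true_params = (2.0, 120.0, 40.0, 15.0, 22.0)        # Kp, tau1, tau2, theta, C
rng = np.random.default_rng(0)
y_meas = sopdt(true_params, u, dt) + rng.normal(0.0, 0.3, len(t))  # add noise

def mse(kp, tau1, tau2, theta, c):
    return np.mean((y_meas - sopdt((kp, tau1, tau2, theta, c), u, dt)) ** 2)

# Dead time is searched on a coarse grid because rounding theta to whole
# samples makes the MSE piecewise constant in theta.
best = None
for theta in range(0, 31, 5):
    res = minimize(lambda q, th=theta: mse(q[0], q[1], q[2], th, q[3]),
                   x0=[1.0, 100.0, 20.0, 20.0], method="Nelder-Mead")
    if best is None or res.fun < best[0]:
        best = (res.fun, theta, res.x)
print("identified Kp, tau1, tau2 =", best[2][:3], " theta =", best[1],
      " C =", best[2][3], " MSE =", best[0])
print("true parameters          =", true_params)

As noted in item 4, don't be surprised if the identified tau1 and tau2 come out closer together than the true 120 s / 40 s pair; the fitted model can still reproduce the measured response almost exactly.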
I would use the above technique one at a time with each heater and temperature sensor. Actually, one can excite each heater one at a time but record the data from all 3 temperature sensors at the same time.
I posted a link to a scilab program that does the same thing many years ago but no one seemed interested.
JCH, you should copy this so your program can handle dead times, arbitrary inputs, and poles that are not all at the same place. What you appear to be missing is the quasi-Newton (BFGS) or Levenberg-Marquardt code that allows you to do proper optimization. I bet you use a grid search.
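As a footnote to items 6 and 9, the textbook IMC rules for an ideal (non-interacting) PID on an SOPDT plant fit in a few lines. These are the standard formulas, not necessarily the exact variant I use or the one on controlguru.com; tau_c is the closed-loop time constant you choose.

# Textbook IMC tuning for an SOPDT plant Kp*e^(-theta*s)/((tau1*s+1)(tau2*s+1))
# with an ideal (non-interacting) PID; tau_c is the desired closed-loop time
# constant. Example numbers match the identification sketch above.
def imc_pid(kp, tau1, tau2, theta, tau_c):
    kc = (tau1 + tau2) / (kp * (tau_c + theta))   # controller gain
    ti = tau1 + tau2                              # integral (reset) time
    td = tau1 * tau2 / (tau1 + tau2)              # derivative time
    return kc, ti, td

kc, ti, td = imc_pid(kp=2.0, tau1=120.0, tau2=40.0, theta=15.0, tau_c=40.0)
print(f"Kc = {kc:.3f}, Ti = {ti:.1f} s, Td = {td:.1f} s")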
Peter Nachtwey

Peter, sorry I didn't answer earlier; I was answering JCH and putting the data up. I considered the route you illustrated, and perhaps I should have tried harder. But in your example in the above text you implicitly assumed (by writing the equation that way) that a good description was accessible through a single state variable per heater/sensor, and I ran into intellectual problems trying to keep the flexibility for extension. I did read your earlier posting and will reread it. The problem I had was this: suppose the correct set of terms for sensor one was

x' = Ax + Bu
y = Cx + Du

where u is the heater power, y is the sensor reading, and x is the internal state vector, larger than u or y. Now a set of individual SISO tests, combined using superposition, would result in individual state vectors xi and matrices Ai, Bi, Ci, Di. Some of the state variables might be shared between the individual responses and some not. How do you determine which ones are shared? I am sure I can make up a circuit with discrete components that would illustrate this. This interpretation problem may be resolvable, but I didn't see how to do it. If I am missing your point, please post an example with more state variables.
It does seem as though some generalization of Thévenin's equivalent-circuit theorem might be possible and applicable. Maybe using Tellegen's theorem? But these thoughts are just meanderings of my mind.
Ray

They are all independent, but the result at each temperature sensor will be the sum of the contributions from the 3 heaters. This should hold true unless there is something I don't know about where superimposition doesn't apply. The states for a system of SOPDT equations would simply be the temperature and the temperature rate at each of the 3 temperature sensors. I don't see how the temperature at one sensor would affect another temperature, since the temperature sensors are not heat sources.
Peter Nachtwey

You're right, and I see how the dependent/independent states have to be handled in a state-space representation. A state-space reduction is required to make the A matrix nonsingular in the ABCD representation.

As a note: in more complicated systems you need additional terms on the left, e.g.
m1*t1''' + n1*t1'' + p1*t1'
m2*t2''' + n2*t2'' + p2*t2'
Ray