PID autotuning - not working for heating application

Is there a control engineering expert here?

I could use a bit of help on how to implement a PID autotune function for a heating application (a small boiler).

My current PID autotune function produces no reliable results. I use the relay feedback method (Åström and Hägglund), but it seems that the ultimate period Tu - one of the parameters determined with the relay feedback test - is directly correlated to the relay output step u. u is an arbitrary value, which makes Tu an arbitrary value. Since Tu is required to compute Ti and Td (e.g. Ti = 0.5 Tu in Ziegler-Nichols), autotune is not possible.
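
For reference, here is a minimal sketch (not your code; the function and variable names are mine) of how the relay-test numbers are usually turned into gains. One point worth noting: for a linear plant with a symmetric relay, Tu is set (approximately) by the plant's phase-crossover frequency and should not depend on the relay step d; instead the oscillation amplitude a scales with d, and the two cancel in the ultimate-gain estimate Ku = 4d/(pi*a). If Tu tracks u in your test, that is a symptom of the asymmetric heat-fast/cool-slow behaviour you describe rather than of the formulas themselves.

# Hedged sketch: turn a relay-feedback test log into Ziegler-Nichols PID
# parameters, assuming a symmetric relay of amplitude d around the mean output.
import math

def relay_autotune(t, y, setpoint, d):
    """t, y: time/temperature samples logged during the relay test.
    d: relay output amplitude (heater power swing around its mean).
    Returns (Kp, Ti, Td) per classic Ziegler-Nichols."""
    # half the peak-to-peak amplitude of the limit cycle
    a = (max(y) - min(y)) / 2.0
    # ultimate period: average spacing of upward setpoint crossings
    crossings = [t[i] for i in range(1, len(y))
                 if y[i - 1] < setpoint <= y[i]]
    Tu = (crossings[-1] - crossings[0]) / (len(crossings) - 1)
    Ku = 4.0 * d / (math.pi * a)   # describing-function estimate of ultimate gain
    Kp = 0.6 * Ku
    Ti = 0.5 * Tu
    Td = 0.125 * Tu
    return Kp, Ti, Td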

I believe this is because the machine heats very quickly (1500W boiler) and cools down very slowly, so the process value isn't a sinusoid. Here is an example graphic: the green line is the setpoint, the red plot is the process value, and the grey vertical bars represent heat output u:

formatting link
Regardless of u, the temperature always dips the same small amount below setpoint - because the machine cools slowly, the temperature cannot fall far below setpoint before it reacts to the heat. It then shoots up by an amount that is proportional to u. As a result, the plot resembles mountains above the setpoint. If u is large, the temperature shoots high above the setpoint and takes very long to cool down. Big mountains, so Tu gets large; Tu ~ u is my problem.

Since all PID temperature controllers have Autotune, there must be a solution for this problem. Any ideas?

Reply to
Frank W.

As you probably know from control theory, the basic theory of a PID controller is that you have a system described by a set of linear differential equations that is inherently unstable or has some performance problems. As a result you strap a PID controller onto it (with said controller also described by its own linear differential equations), and the resulting system (now described by linear differential equations which are a mathematical mix of the underlying system and the PID controller) has better characteristics.

Did you notice that there is a word that appears many times in my description above?

Want to guess what the word is?

That word is "linear".

A system with a time delay is not described by linear differential equations. Strapping a PID controller onto it is bad math.

One of the more classic examples is a shower, or an industrial process that mixes fluids of varying temperature where the sensor is located substantially downstream from the mixing valve. This is a pure time delay. My shower at home is like that. I turn the water a little hotter. Nothing happens. I turn it a little hotter still. Nothing happens. Then I turn it a little hotter again. Then the wave of hot liquid hits me and I scream in agony.

Over time, I've adapted to my shower. I don't burn myself anymore.

I think the control algorithms you want to use for a system like yours fall outside the range of PID. I'm sure there is a body of theory that covers it, but I don't know what that is.

I would heat the system full bore for a fixed period of time, then stop and wait to see how the temperature catches up. And work from there.

The best control strategy for that system isn't going to be PID. That is a non-linear system.

Datesfat

Reply to
Datesfat Chicks

I've been resisting forking this over into the control newsgroup: now it's compelling.

Systems with delay can be perfectly linear, as well as time invariant -- they just can't be described by ordinary differential equations with a finite number of states.

To be linear, a system only needs to satisfy the superposition property. A delay element satisfies superposition just fine.

And while a PID controller may not be the theoretically best controller for a system with delay, in many cases it's not a bad choice at all. PID controllers can and will give perfectly satisfactory service with plants that have significant delay. The thousands, if not millions, of PID controllers in mills and factories around the world that are controlling plants whose responses are dominated by delay certainly belie any declaration that the PID controller isn't a good choice to control a plant with delay.

None of the above is intended to minimize the difficulty in analyzing and designing a truly optimal controller for a plant with pure delay -- that's an exercise that can make your brain hurt, and fast. And nothing of the above is intended to chase you away from taking plant delays more directly into account if a discrete-state controller such as a PID won't let you eke out the performance that you need from your plant.

But in the absence of significant nonlinearities or time varying behavior you can use all the analysis tools that are suitable for linear time invariant systems on a system with delays just fine. You can do good design work without ever having to explicitly write out the differential equations, much less solve them.

So if you don't want to get lost in Mathemagic Land searching for performance that isn't necessary for your product's success, a good ol' PID controller may be exactly the optimal controller -- in terms of adequate performance and reasonable engineering time -- even if it doesn't satisfy any egghead academic measure of "optimal" for the particular plant you're trying to control.

Reply to
Tim Wescott

Hi Tim,

I might have missed something significant here.

It is my assumption that a system with a pure time delay is inherently non-linear.

Let's take my shower example with a pure delay in the pipes ...

With no delay, you can just say that

Temperature(t) = Valve_Position(t)

or perhaps with a little thermal mass thrown in you can say that:

d Temperature / dt = K * (Valve_Position - Temperature)

where of course I'm assuming that valve position and water temperature are the same thing.

The first is I think a 0'th order linear differential equation and the second is a 1st-order LDE.

But how would you linearize a system with a pure time delay, exactly?

The shower example with a pure pipe delay between the shower valve and my skin is fine.

Thanks, Datesfat

Reply to
Datesfat Chicks

Superposition is sufficient proof of linearity. What comes out of a pipe (assuming that there is no mixing in transit) is almost a delayed linear superposition of what is pushed into it, but it is not linear because it is not a pure delay. When the input velocity increases because both hot and cold water are flowing, the delay time decreases. Superposition doesn't strictly apply because the time to look isn't well defined.

Any delay pushes a servo system toward unstable. That's not a linearity problem.

There's also the time it takes the valve to move.

It's already linear. Just nasty.

But, as I wrote above, a pipe is only a pure delay as long as the flow is constant.

Jerry

Reply to
Jerry Avins

Well, if it's already linear you don't linearize it.

Take the system y = h(x, t) ==> y(t) = x(t - td). Testing this with superposition we get

y1(t) = x1(t - td), y2(t) = x2(t - td),

y1(t) + y2(t) = x1(t - td) + x2(t - td)

which is both h(x1, t) + h(x2, t) and h(x1 + x2, t) -- therefore the system is linear.

Note that, as Jerry points out, a shower isn't necessarily a linear system unless your shower valve ensures a constant flow and the pipes don't have any turbulence. Let a vastly simplified version be

y(t) = x(t - kd * x(t)),

(this doesn't capture the delay behavior in even a perfect pipe)

Then we try superposition:

y1(t) = x1(t - kd * x1(t)), y2(t) = x2(t - kd * x2(t)).

The sum of these does _not_ equal the system's response to the summed input:

ys(t) = x1(t - kd * (x1(t) + x2(t))) + x2(t - kd * (x1(t) + x2(t))).

so this system isn't linear -- but not for the reason that you thought.
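
A quick numerical illustration of the two cases above (my own sketch, not from the thread; np.interp is just standing in for an ideal delay on a sampled grid):

# Superposition check: a pure delay passes, the flow-dependent delay
# y(t) = x(t - kd*x(t)) does not.
import numpy as np

t = np.linspace(0, 10, 2001)
td, kd = 1.0, 0.5

def delayed(x, t, td):
    # pure transport delay: y(t) = x(t - td), zero before the signal starts
    return np.interp(t - td, t, x, left=0.0)

def flow_delay(x, t, kd):
    # toy nonlinear delay: y(t) = x(t - kd*x(t))
    return np.interp(t - kd * x, t, x, left=0.0)

x1 = np.sin(t)
x2 = 0.5 * (t > 3).astype(float)   # a step, so the two inputs differ

# pure delay: response to the sum equals sum of responses (superposition holds)
lin_err = np.max(np.abs(delayed(x1 + x2, t, td)
                        - (delayed(x1, t, td) + delayed(x2, t, td))))

# flow-dependent delay: superposition fails
nl_err = np.max(np.abs(flow_delay(x1 + x2, t, kd)
                       - (flow_delay(x1, t, kd) + flow_delay(x2, t, kd))))

print(f"pure delay superposition error:           {lin_err:.2e}")  # ~0
print(f"flow-dependent delay superposition error: {nl_err:.2e}")   # clearly > 0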

Reply to
Tim Wescott

On Nov 10, 11:54 am, Tim Wescott wrote:

I have recently done a thermal MIMO PID controller that ended up performing adequately despite using very simple controls. Some comments: even the simplest differential description ends up with an infinite number of states/poles, and most real thermal systems have little tabs and things that foul up theoretical analysis. Therefore you can start with simple mathematical models to estimate requirements, but you always end up with approximations. Pole-zero analysis in this case is almost worthless except to roughly get started. Bode and/or Nichols chart analysis (I used both) works very well, but you have to get and use the experimental data; you can use that directly or find a sufficiently good model for the system. You should establish a "process" for the tuning and experiments; the system you take the data on will undoubtedly not be the one that ends up being manufactured.

Gotchas: Scilab's system identification processes are unstable dealing with this type of system; they can be used to attempt modelling, but tread carefully and double check. When taking the data, the room/environmental temperature will do everything it can to confound the experiment. Don't worry about the lower frequencies; go to where the phase starts to shift significantly. For the Bode/Nichols-derived compensation, just redo the experiment (which you probably will) to clarify the standard compensation region around the Bode criterion: 180 degrees +- one or two decades. Try to give at least hints about how the tuning was done for the "outsourced" maintenance people who have to maintain the tuning after the mechanical assembly is altered, unless you want to come back and start over yourself in a year.

Really, really examine the code to make sure you don't get integrator "windup". I was forced to rely on programmers in another group, and I had to study the experimental results for a while to realize that the anti-windup code just clipped the output, not the integrator.
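
For what it's worth, the difference Ray describes is only a couple of lines of code. A hedged sketch (names and limits invented): clamping only the output lets the integral term keep growing during saturation; conditional integration holds the integrator back instead.

# Sketch of the anti-windup point above: clipping only the *output* still lets
# the integrator grow without bound; the integrator itself must be held back.

def pid_step(err, state, Kp, Ki, Kd, dt, out_min=0.0, out_max=100.0):
    d_term = Kd * (err - state["prev_err"]) / dt
    state["prev_err"] = err

    # candidate output with the integrator as it stands
    u_unclamped = Kp * err + Ki * state["integ"] + d_term
    u = min(max(u_unclamped, out_min), out_max)

    # conditional integration: only accumulate when the output is not
    # saturated in the direction the error is pushing it
    if (u_unclamped < out_max or err < 0) and (u_unclamped > out_min or err > 0):
        state["integ"] += err * dt

    return u

state = {"integ": 0.0, "prev_err": 0.0}
# u = pid_step(setpoint - measurement, state, Kp=8.0, Ki=0.2, Kd=1.0, dt=0.1)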

Ray

Reply to
RRogers

I agree with the last paragraph. However, I have had a lot of success with identifying system poles and zeros. I can then place both where I want with the controller gains.

I didn't know Scilab has a system identification function, but I have used lsqrsolve and optim successfully.

Peter Nachtwey

Reply to
pnachtwey

Interesting. I have thought about going that route but opted for a more conventional process: system identification routines. But that wasn't very satisfactory. I have a problem in that I like to continue along routes until I really understand why they don't work. Sometimes I think that half my brain is autistic. Once I get my system identification code reorganized (with or without a GUI) I plan to test it against my data and some available test cases from NICONET, although they don't seem to be MIMO. In biological testing equipment you are forced into MIMO situations in order to get the required temperature accuracy over large testing areas and environmental conditions. In addition, mammalian reactions are tuned to constant temperature within a narrow band; 37 degC in our case (presuming no aliens in the group). I was actually looking forward to doing that; I had never used MIMO before. I wasn't so enthused after a while; the design process is a lot more complicated and the tools were not robust. Once I resolve (or at least identify) the problems, perhaps I will compare the results with lsqrsolve. If you're interested I will post a link here, but don't expect anything soon. I am just settling into Mexico, and am not as fast as I used to be.

Ray

Reply to
RRogers

When you have MIMO test data, why don't you share it with us? I would like to have a crack at it too. It would be helpful to know what I am fitting data to, though, so I can get the general form of the equations right. I don't know anything about your field of study.

The trick is how you use optim() and lsqrsolve(). The best system identification uses Runge-Kutta to integrate the model's system of differential equations.

For MIMO systems you will need to use optim(). optim() can optimize a cost function. lsqrsolve() requires two arrays of data, the actual data and the estimated data. I don't know how you would do this if you have two sets of actual data and two sets of estimated data.
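
One common answer to the "two sets of actual data" question is to simulate all outputs from one parameter vector and fold every residual into a single scalar cost, which is exactly what an optim()-style minimizer wants. A rough sketch in Python (scipy.optimize.minimize standing in for Scilab's optim(); the two-output first-order model is only a placeholder, not anyone's actual plant):

# Stack the residuals of every output into one scalar cost and hand that to a
# general optimizer.
import numpy as np
from scipy.optimize import minimize

def simulate(params, u, dt):
    """Integrate dT/dt = A*T + B*u + C for two outputs with RK4 steps.
    params = [A1, B1, C1, A2, B2, C2]; u is the shared heater input."""
    A1, B1, C1, A2, B2, C2 = params
    def f(T, uk):
        return np.array([A1 * T[0] + B1 * uk + C1,
                         A2 * T[1] + B2 * uk + C2])
    T = np.zeros((len(u), 2))
    for k in range(len(u) - 1):
        k1 = f(T[k], u[k]); k2 = f(T[k] + 0.5 * dt * k1, u[k])
        k3 = f(T[k] + 0.5 * dt * k2, u[k]); k4 = f(T[k] + dt * k3, u[k + 1])
        T[k + 1] = T[k] + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return T

def cost(params, u, dt, T_meas):
    T_est = simulate(params, u, dt)
    resid = T_est - T_meas              # shape (N, n_outputs)
    return np.sum(resid ** 2)           # one scalar over all outputs

# usage (with measured arrays u, T_meas and sample time dt):
# fit = minimize(cost, x0=[-0.01, 0.1, 0.2, -0.02, 0.05, 0.2],
#                args=(u, dt, T_meas), method="BFGS")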

Peter Nachtwey

Reply to
pnachtwey

I don't quite understand your approach; it seems different from what I had in mind. I have multiple sets of experimental data consisting of three stimulus/drive columns and three columns of resulting temperature data, together with a multitude of other columns of temperature readings used for the thermal design of the overall assembly. My hypothetical approach to raw curve-fitting modeling: write out the ABCD equations with unknown coefficients and try to find the coefficients, which are (superficially) linear coefficients applied to the data. With an adequate model in hand, I thought I would then use optim() to find the control gains in the closed loop. This is not what you are describing. My formulation was just a passing thought and certainly has a lot of problems I haven't resolved. Your comments don't fall in line with this, so why not tell me yours?

Brief technical details follow (of interest only to those who enjoy these things): The system consists of three heaters and three sensors; actually far more sensors for the data, but the others were temporary and informational for the rest of the machine and not used in control. The machine has a disk holding something like 20 test strips, rotating the strips under a dispenser and then under an optical head, so each test strip is rotated to have a drop of sample deposited and then put under the optical head to monitor the reaction development. One of the heating systems was a buffer to isolate the test disk from the room. The other two are more precise and localized controls that hold the sample tray fairly precisely at 37 degC. The reason for two heaters: one controls most of the circular sample disk consisting of the 20 or so test strips that have been entered; the other heater brings incoming test strips up to temperature from room temperature when they are inserted.

The original specs were that the samples had to be at 37 degC +- .1 degC while the reaction was occurring, warmup in 5 minutes, ambient/room temperature 18 degC to 30 degC. I designed the control system to be .02 degC accurate at the tray thermistor, with control loop closure inside of two minutes from power-up, and PID controls around the principal MIMO directions (the thermistors were placed reasonably close to the individual heaters). The last part was to make the programming (done by another group) and maintenance easier, requiring less skilled people, and the end performance was adequate.

The problems involved were:
1) I couldn't put the thermistors where I actually wanted them.
2) I couldn't be hyper-conservative and truly insulate the assembly (the mechanical people had more than enough problems), so I had to rely on chunks of metal smoothing out the spatial frequencies. Of course the assembly had variations across it anyway.
3) I never had the final machine available during testing, because the mechanical people needed to know about thermal problems before the design was finished, and I didn't want to be the person holding up release after the machine was finished. That was only a problem during testing, since one set of readings would be different from a set taken later.
4) The sys-id routines were not robust and had to be watched very carefully. In fact I ended up using the DC gains of the models as the first quality determiner. Then I would look at various residuals to determine the real quality. Usually the test data was split in half (or so) so the model wouldn't be just regurgitating data back to me: the first half was used to determine a model, the model was used to predict the second half, and the resulting residual time series were examined. I wanted the residuals to be below .1 degC (1 part out of 370) or so but never got there due to inadequacies in the model, and I had to settle for 2 degC; the slack/error was taken up when the loops were closed. Apparently the sys-id routines want random inputs, whereas people are more comfortable with large step inputs. I have both types of data.
5) All of the heater systems talk to each other and to the environment thermally; that is the reason for the MIMO approach.
6) Severe organizational problems with people who had never done instrument design before (: That's a different story.

What is driven home is the fact that in thermal situations you are just looking for an adequate model of reality, not for "truth". The mechanical assembly cannot be reduced to anything less than an FEA analysis, which I couldn't get the department to institute; it's not a trivial thing to incorporate in a design process. Having done a partial survey, I think COMSOL is a pretty good multiphysics tool and does have the ability to incorporate SPICE models between objects like a thermistor (actually a point) and a heater.

And so on, I have more information. None of this relates to any proprietary information; except if I come up with a better process I can answer questions from the engineer who has to redo the system after they make changes to the mechanical design. The design changes are inevitable and occasionally people get back to me with questions.

If you really want some data I can post it on an FTP site. The project is done and I am retired, so there is no hurry. The data is not clean and has a lot of confounding disturbances; OTOH there is a lot of it :) I am still interested in determining a better process for establishing good models, although I am inclined to fix up the sys-id functions so that higher-order approximations don't lead to (wildly) worse and worse predictions. That is just nonsense. Be aware that my criteria are DC gain and residuals, and any comment on the modelling will probably be oriented around that. If you're interested in my code: my Scilab program does produce a lot of outputs, Bode and Nichols charts, but it is not finished code in the sense that some parameters are handled through I/O and some are entries in the code. There are shortcomings; I never did a good Bode plot of the raw data, just of the models. I kept meaning to, but that requires a lot of filtering to be meaningful.

Hope I haven't bored you too much, Peter.

Ray

Reply to
RRogers

See simple example with differential equation of order 2:

  • formatting link
    I try to find the best possible process transfer function (page 1) by using approximation methods on the basis of some measured values (page 2).

Thereafter I have a benchmark test scheme (page 3) with a program (page 4) that automatically finds the best PID parameters using the IAE criteria.

This could be done for process identifications up to differential equations of degree 6.

Reply to
JCH

clip..........

Okay I have uploaded the file that corresponds to step inputs. This one is fairly clean.

formatting link
should get you there. If there is a permission problem, let me know and I will resolve it.

The .jpg is a graph to get the idea. T-11 is included to verify the environment didn't change much. In the .xls: sheet 1 is graphs; sheet "static-test" is the long experimental run covering about 4 hours. Cols T-1, T-2, T-3 are the three direct thermistors used later for control. Cols M, N, O are the PWM drives, 0-100%, to the corresponding heaters; the trailing columns can be ignored. The intermediate columns are various sensors distributed away from the actively controlled points.

Let me know and I (or you) can cross-verify your model against other experimental runs.

I have other experimental data sets that are less clear; some are basically random inputs to try to satisfy the sys-id programs.

Ray

Reply to
RRogers

"RRogers" schrieb im Newsbeitrag news: snipped-for-privacy@x5g2000prf.googlegroups.com...

Basically referring to

  • formatting link
    Can you approach the best possible ODE (process transfer function) in a range of order
Reply to
JCH

Why not use the principle of superposition? Test each heater with respect to each sensor and then find the FOPDT or SOPDT coefficients that work. For the first temperature sensor you have a FOPDT formula that looks like

t1' = A1*t1 + B11*u1(t-dt11) + B12*u2(t-dt12) + B13*u3(t-dt13) + C

where:
t1 is the temperature at sensor 1.
A1 is the system time constant term at temperature sensor 1 (in discrete form with sample time T this is basically exp(-T/tau1)).
B11 is the input coupling of heater 1 to sensor 1.
B12 is the input coupling of heater 2 to sensor 1.
B13 is the input coupling of heater 3 to sensor 1.
u1(t-dt11) is the heater 1 signal delayed by dt11, the dead time from heater 1 to sensor 1.
C is the ambient temperature term. It had better be the same for all tests unless the ambient temperature is really changing.

It is easy to ID B11, B12 and B13 if the heaters are turned on one at a time, but the starting point should be ambient temperature or some steady state. When done you would have this:

t1' = A1*t1 + B11*u1(t-dt11) + B12*u2(t-dt12) + B13*u3(t-dt13) + C
t2' = A2*t2 + B21*u1(t-dt21) + B22*u2(t-dt22) + B23*u3(t-dt23) + C
t3' = A3*t3 + B31*u1(t-dt31) + B32*u2(t-dt32) + B33*u3(t-dt33) + C

All the coefficients could probably be ID'd at once, but then it would be much harder to get exact values. It is best to do small sections at a time and rely on superposition.
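
To make the structure above concrete, here is a hedged sketch that simulates three coupled first-order responses with input dead times. All numbers are invented placeholders, and the ambient term is written as A*(T - ambient) so the model settles at ambient with the heaters off (a rearrangement of the C term above):

# Simulate 3 heaters -> 3 sensors with per-path dead times (plain Euler loop).
import numpy as np

dt = 1.0                                  # sample time, s
N = 4000
A = np.array([-0.004, -0.005, -0.003])    # -1/tau for each sensor
B = np.array([[0.010, 0.002, 0.001],      # B[i][j]: heater j -> sensor i
              [0.002, 0.012, 0.003],
              [0.001, 0.003, 0.009]])
dead = np.array([[30, 60, 90],            # dead times, in samples
                 [60, 30, 60],
                 [90, 60, 30]])
ambient = 22.0                            # degC

u = np.zeros((N, 3))
u[500:, 0] = 50.0                         # step heater 1 at t = 500 s
T = np.full((N, 3), ambient)

for k in range(N - 1):
    dTdt = np.empty(3)
    for i in range(3):
        heat = sum(B[i, j] * u[max(k - dead[i, j], 0), j] for j in range(3))
        dTdt[i] = A[i] * (T[k, i] - ambient) + heat
    T[k + 1] = T[k] + dt * dTdt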

The way I ID a system is like this

formatting link

  1. On page 1/10 I define the ideal SOPDT system. I chose different values to see how well the system identification works under different conditions. Notice that there is dead time and I don't assume all the poles are at the same location like others on this newsgroup.
  2. At the bottom of page 2/10 I generate the test data that is later used for system identification. I add noise to the ideal data just to simulate reality a bit. The CO(t) function is a few steps. The function can be arbitrary, but I have found that the excitation is critical to the identification. Dead times and time constants are determined more accurately if there are steps or rapid changes. The gain and ambient coefficients are determined more accurately if there are steps at different levels.
  3. On page 3/10 I plot and save the generated test data. I can post it on my FTP site for you to practice with. Notice that this data has dead time and two poles that aren't at the same location. I could have added more noise but the quasi-Newton method seems to filter it out well.
  4. On page 4/10 the system identification is done. Mathcad's Minerr function can act like either Scilab's optim() function or its lsqrsolve function depending on the option chosen. I chose the quasi-Newton optimization, which is similar to the optim() function. Runge-Kutta is used to integrate the differential equation. The differential equation doesn't need to be linear; I could easily put a nonlinear term in there, like one that changes the gain as a function of temperature. This happens with heat exchangers because of the LMTD. Fluid systems are often of the form v' = g/m - K*v^2. It is easy to ID nonlinear systems IF you know the general form of the equation and just need to ID the constants. Notice that the ID'd poles are closer together than the real poles; I have noticed that system identification tends to ID the poles closer together than they really are. Notice also that I ID a dead time and an ambient temperature. This is something that JCH does not do. At the bottom is the MSE(), mean squared error, function where I calculate the mean squared error between the estimated temperature and the actual or test data temperature. The Minerr function adjusts Kp, t1, t2, thetap, and C till the MSE is minimized. You can see the results are not perfect, but that is reality.
  5. On page 5/10 the actual or perfect response is compared to the estimated response. The response looks close, almost identical, even though the system identification puts the estimated poles closer together. Also notice that a good system identification routine can ID systems that are excited by more than just a step change; in fact it must be able to do system identification with arbitrary excitation. Above I said the excitation is the key to doing system identification. One key is to make multiple steps at different levels. This is very important in computing the gain, especially when the gain isn't linear. A heat exchanger's gain changes because of LMTD (log mean temperature difference).
  6. On page 6/10 PID gains are calculated using the estimated plant parameters found by system identification. My formula is a little more complex than the IMC formulas, but the response is faster/better for the same closed loop time constant. I doubt the extra complexity in the formula is worth the effort for most applications.
  7. Page 7/10 simulates the PID control of the original system using the gains calculated from the system identification. Notice that feedback noise is simulated as well as the dead time.
  8. Page 8/10 shows the simulated response. The response isn't perfect because there was noise in the original data used to do the system identification. The system identification is not perfect because the poles are closer together than they should be, and I simulated noise on the feedback, but this is closer to reality.
  9. Page 9/10 uses the internal model control (IMC) gain formulas that I got from the
formatting link
site. They work well too and are much simpler, though they don't work quite as well as mine. I should have provided an IAE value for my gains and the IMC gains for comparison. (A minimal sketch of that style of formula follows after this list.)
  10. Page 10/10 shows the IMC response is a little slower, but most would be pleased with it.
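
Since the link in item 9 is elided here, the sketch below only shows the common textbook IMC PI rule for a FOPDT model Gp(s) = Kp*exp(-theta*s)/(tau*s + 1); the exact PI/PID variants on the referenced site may differ.

# Hedged sketch of the basic IMC tuning rule for a FOPDT plant; lam is the
# chosen closed-loop time constant.

def imc_pi(Kp, tau, theta, lam):
    """Return (Kc, Ti) for an ideal PI controller from FOPDT parameters."""
    Kc = tau / (Kp * (lam + theta))
    Ti = tau
    return Kc, Ti

# e.g. with the kind of parameters a system ID step might return:
# Kc, Ti = imc_pi(Kp=2.0, tau=180.0, theta=15.0, lam=60.0)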

I would use the above technique one at a time with each heater and temperature sensor. Actually, one can excite each of the heaters one at a time but record the data for the 3 temperature sensors at the same time.

I posted a link to a scilab program that does the same thing many years ago but no one seemed interested.

JCH, you should copy this so your program can handle dead times, arbitrary inputs, and poles that are not all at the same place. What you appear to be missing is the quasi-Newton (BFGS) or Levenberg-Marquardt code that allows you to do proper optimization. I bet you use a grid search.

Peter Nachtwey

Reply to
pnachtwey

When starting the identification process the system must be at steady state. The three temperature sensors are at different temperatures. That could be steady state for a combination of heater outputs but it is hard to know. If all the heaters started at the same ambient temperature then I know the system was at steady state.

Peter Nachtwey

Reply to
pnachtwey

On Nov 15, 6:14 am, "JCH" wrote:


We seem to have a disconnect here. The system is MIMO, which means that a finite model would have a set of simultaneous differential equations. In my case there are three independent drive variables and three dependent variables, leading to three simultaneous differential equations whose order varies with the number of state variables needed for an adequate description. Including the room temperature we actually have four drives. Including the various components inside the instrument (motors, solenoids, and doors) we would have more; but for the sake of simplicity I took 3 drives and 3 sensors and treated the other drives as disturbances. That was a design assumption that could have been rendered wrong by results, but then I would have had to add more sensors and possibly more heaters. The reason for the 3 heaters and sensors is to establish control over extended mechanical assemblies having basically an infinite number of internal states, although the higher-order states are rapidly suppressed by the heat equation when the metal thermal time constant is short.

As an illustration, take the simple case of the sun warming a piece of ground through the seasons. The result is basically that a 20 degC surface variation causes a .5 degC variation 2 meters down with a six-month lag, with the transfer function having an infinite number of poles and a continuously rolling phase shift going through 180 deg over and over. This imposes constraints when you are trying to hurry it up via control systems. These numbers are "representative" since I am remembering; I do have the Bell Labs book somewhere that solves the equation. Alternately: writing the Green's function for the internal temperature of a bar heated at the surfaces requires an infinite-degree polynomial, resulting in an infinite number of poles in the Laplace transform. But the significance of the higher poles drops off exponentially, so they don't matter unless you try to wrap a control loop around them and close the loop with time constants that are comparable.
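
For anyone who wants to put numbers on the ground-warming illustration: the classic periodic solution of the heat equation in a semi-infinite solid is T(x,t) = T0*exp(-k*x)*cos(w*t - k*x) with k = sqrt(w/(2*alpha)), so both the attenuation and the lag grow with depth. A small sketch with an assumed diffusivity (the actual figures depend strongly on soil properties, so don't expect it to reproduce the remembered numbers exactly):

# Attenuation and phase lag of an annual surface temperature swing at depth x.
import math

alpha = 5e-7                       # thermal diffusivity, m^2/s (assumed)
w = 2 * math.pi / (365 * 86400)    # annual forcing, rad/s
x = 2.0                            # depth, m
T0 = 20.0                          # surface swing amplitude, degC

k = math.sqrt(w / (2 * alpha))
amplitude_at_depth = T0 * math.exp(-k * x)
lag_days = (k * x / w) / 86400

print(f"swing at {x} m: {amplitude_at_depth:.2f} degC, lag {lag_days:.0f} days")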

And so on Ray

Reply to
RRogers


Peter, sorry I didn't answer earlier; I was answering JCH and putting the data up. I considered the route you illustrated, and perhaps I should have tried harder. But in your example above you implicitly assumed (by writing the equation that way) that a good description was accessible through a single state variable per heater/sensor, and I ran into intellectual problems trying to have the flexibility for extension. I did read your earlier posting and will reread it. The problem I had was this: suppose the correct set of terms for sensor one was

x' = A*x + B*u
y = C*x + D*u

where u is heater power, y is the sensor readings, and x is the internal state vector, larger than u or y. Now a set of individual SISO readings, using superposition, would result in individual state vectors xi and Ai, Bi, Ci, Di. Some of the state vectors might be shared between the individual responses and some not. How do you determine which ones are shared or not? I am sure I can make up a circuit with discrete components that would illustrate this. This interpretation problem could be resolvable, but I didn't see how to do it. If I am missing your point please post an example with more state variables.

It does seem as though some generalization of Thévenin's equivalent circuit theorem might be possible and applicable. Maybe using Tellegen's theorem? But these thoughts are just meanderings of my mind.

Ray

Reply to
RRogers

Peter, Okay, I will post that experiment but it's not as clean. Since I only had shared access to the prototype I couldn't let the machine cool down long enough for a real restart, and (of course) the room temperature changed. These thermal systems have really long "tails"; some sections (plastic) absorb heat and let it out very slowly.

Ray

Reply to
RRogers
