Smoothing of learning curves

I work a lot with adaptation methods on low-cost embedded systems and am looking for new smoothing methods for 2D learning tables. Currently I use a method that divides a measured difference over the 4 nearest grid points based on a distance calculation, but this seems to give bad results if the data distribution is not uniform. Does anybody know some good methods, based either on learning of grid points or on curve fitting, that can be implemented in a small recursive form with a minimal increase in system resources (both memory and required calculation power)?

Kind Regards, Emiel Nuijten

Reply to
emiel

Could you explain the problem in more detail? Perhaps give a sample problem and the desired solution? There are several possible solutions depending on just what you need to do.

Reply to
Herman Family

I am trying to learn system deviations in a map on an iterative basis (combustion adaptation for diesel engines). It is meant to compensate unexplainable/unmodeled errors in the system that show a relation with torque and engine speed (car engine). The idea is to use an adaptation algorithm as a temporary solution to decrease these system deviations and, at the same time, use the obtained data to learn more about the system. For this automotive application I use a linearly interpolated table of size 6x6 (memory limitation) with torque and engine speed as inputs. Outside the area for which the table is defined, we state that the table outputs a zero value. Smoothing of the table can be done in several ways. One method is to use a smoothing algorithm that passes over the complete table; this would, however, require too much processing power for real-time processing.
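For concreteness, a lookup on such a table might look like the C sketch below. The breakpoint axes, names, and values are illustrative placeholders, not from the post; only the 6x6 size and the zero-outside behaviour follow the description above.

    #define NT 6                       /* torque breakpoints          */
    #define NN 6                       /* engine speed breakpoints    */

    /* Illustrative breakpoint axes (Nm, rpm); values are placeholders. */
    static const float tqi_axis[NT] = { 0, 50, 100, 150, 200, 250 };
    static const float n_axis[NN] = { 800, 1400, 2000, 2600, 3200, 3800 };
    static float F[NT][NN];            /* learned table, starts at zero */

    /* Bilinear interpolation of F at (tqi, n); zero outside the grid. */
    float table_lookup(float tqi, float n)
    {
        int i = 0, j = 0;
        float u, v;

        if (tqi < tqi_axis[0] || tqi > tqi_axis[NT - 1] ||
            n < n_axis[0] || n > n_axis[NN - 1])
            return 0.0f;               /* table defined as zero outside */

        /* locate the grid cell; a linear scan is fine for 6 points */
        while (i < NT - 2 && tqi > tqi_axis[i + 1]) i++;
        while (j < NN - 2 && n > n_axis[j + 1]) j++;

        u = (tqi - tqi_axis[i]) / (tqi_axis[i + 1] - tqi_axis[i]);
        v = (n - n_axis[j]) / (n_axis[j + 1] - n_axis[j]);

        return (1 - u) * (1 - v) * F[i][j] + u * (1 - v) * F[i + 1][j]
             + (1 - u) * v * F[i][j + 1] + u * v * F[i + 1][j + 1];
    }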

A well-known method is distance-dependent (or in this case "area"-dependent, as a substitute to increase calculation speed) learning. Let's define a function for the linearly interpolated table: F(TQI, N), where TQI is the engine torque and N the engine speed. Let's define our current operating point: (TQI_mes, N_mes) (mes = measured). Then, for the 4 points of the table grid surrounding the point (TQI_mes, N_mes), a factorization can be made that divides the measured system error that has to be learned over the four grid points. Using the (TQI_mes, N_mes) location and the known locations of the 4 table grid points, 4 areas can be defined that split up the area between the 4 table grid points. Example of the area split-up:

    *------------*
    |A1  |   A3  |
    |    |       |
    |____._______|
    |A2  |   A4  |
    |    |       |
    *------------*

where "." is (TQI_mes, N_mes), "*" is a grid point of the table, and A1...A4 are the areas. Suppose the bottom left grid point is calculated; then fac = A3/(A1+A2+A3+A4) defines a learning ratio for this point. The learning factor tends to 1 as the bottom left grid point is approached. Now define V_mes as the system deviation calculated from the measurements taken at the operating point (TQI_mes, N_mes). Then the learning step is x = V_mes - lininp1(F(TQI_mes, N_mes)), where lininp1 is a linear interpolation. The increment on the bottom left grid point becomes x*fac*alpha, where alpha is an extra overall tuning factor defining the learning speed. This is done for all four points during one learning step.
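In code, one learning step of this scheme might look like the following sketch, reusing the illustrative table_lookup, axes, and F from the earlier sketch. Note that the bilinear weights are exactly the opposite-area ratios, e.g. (1-u)(1-v) = A3/(A1+A2+A3+A4) for the bottom left point, so no separate area computation is needed.

    /* One area-weighted learning step at operating point
     * (tqi_mes, n_mes); v_mes is the measured system deviation and
     * alpha the overall learning-speed factor. */
    void table_learn(float tqi_mes, float n_mes, float v_mes, float alpha)
    {
        int i = 0, j = 0;
        float u, v, x;

        if (tqi_mes < tqi_axis[0] || tqi_mes > tqi_axis[NT - 1] ||
            n_mes < n_axis[0] || n_mes > n_axis[NN - 1])
            return;                    /* no learning outside the grid */

        while (i < NT - 2 && tqi_mes > tqi_axis[i + 1]) i++;
        while (j < NN - 2 && n_mes > n_axis[j + 1]) j++;

        u = (tqi_mes - tqi_axis[i]) / (tqi_axis[i + 1] - tqi_axis[i]);
        v = (n_mes - n_axis[j]) / (n_axis[j + 1] - n_axis[j]);

        /* remaining error with respect to the current table */
        x = v_mes - table_lookup(tqi_mes, n_mes);

        /* fac for each corner is its opposite-area ratio, which equals
         * the corresponding bilinear weight */
        F[i][j] += (1 - u) * (1 - v) * x * alpha;
        F[i + 1][j] += u * (1 - v) * x * alpha;
        F[i][j + 1] += (1 - u) * v * x * alpha;
        F[i + 1][j + 1] += u * v * x * alpha;
    }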

The result is a more or less smooth table, depending a lot on the signal distribution properties. With unequal distributions (some points are learned more than others), the deviations become far too high.

The expected curve characteristics cannot be specified in great detail. The curve characteristics that will finally be measured can vary a lot, as they depend strongly on the way the vehicle/system is calibrated. This varies from a flat plane to a nonlinear surface with local minima.

As the above smoothing solution loses accuracy on unequal distributions, it is not suitable for higher-accuracy applications. I am looking for other methods that can smooth data grids, or learn grid points in a smoothing way, with low memory requirements and preferably low calculation power requirements.

Reply to
emiel

OK, so you have a 6x6 matrix with torque as one dimension and speed as the other. You wish to have some function F(torque, speed) from this table. You will then also measure torque and speed, and find the error between the measured value and F(measured torque, measured speed). Based upon this error, you will attempt to create a more accurate F(t, s).

I would not assume that all values were zero outside the table. This leads to interesting and invalid constraints on an overall table formula. Let the data outside the table be "unknown".

You might consider looking at the four corners of the 6x6 table (numbered clockwise from the bottom left: 1, 2, 3, 4). Let delta_f/delta_t = (val(2) - val(1)) / (torque2 - torque1), and let delta_f/delta_s = (val(4) - val(1)) / (speed2 - speed1). Let your error = measured value - (f(bottom left) + additive linear interpolations based on the delta_f's). Let your expected error be the error between the table values (an interpolation based on the local 4 points) and the main interpolation.

The difference between expected error and actual error would be your impetus to learn. You could "learn" by adding some fraction of the unexpected error to each of the nearest points.
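One possible reading of this suggestion, as a C sketch reusing the illustrative table and axes from earlier (the corner-plane fit and the fraction beta are assumptions for illustration, not part of Michael's post):

    /* Global trend: a plane anchored at the bottom left corner of the
     * table, with slopes taken from the outer corners. */
    float plane_lookup(float tqi, float n)
    {
        float dfdt = (F[NT - 1][0] - F[0][0])
                   / (tqi_axis[NT - 1] - tqi_axis[0]);
        float dfds = (F[0][NN - 1] - F[0][0])
                   / (n_axis[NN - 1] - n_axis[0]);
        return F[0][0] + dfdt * (tqi - tqi_axis[0])
                       + dfds * (n - n_axis[0]);
    }

    /* Learn only from the part of the error the plane does not expect
     * (bounds checks as in table_lookup omitted for brevity). */
    void learn_unexpected(float tqi_mes, float n_mes, float v_mes,
                          float beta)
    {
        int i = 0, j = 0;
        float actual, expected, unexpected;

        while (i < NT - 2 && tqi_mes > tqi_axis[i + 1]) i++;
        while (j < NN - 2 && n_mes > n_axis[j + 1]) j++;

        actual = v_mes - plane_lookup(tqi_mes, n_mes);
        expected = table_lookup(tqi_mes, n_mes)
                 - plane_lookup(tqi_mes, n_mes);
        unexpected = actual - expected;

        /* add a fraction of the unexpected error to each nearest point */
        F[i][j] += beta * unexpected;
        F[i + 1][j] += beta * unexpected;
        F[i][j + 1] += beta * unexpected;
        F[i + 1][j + 1] += beta * unexpected;
    }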

Michael

Reply to
Herman Family

My problem is not the interpolation, nor the way to derive the difference that has to be learned. The problem is purely the technique with which the table is learned. What I described above was a technique containing a kind of "smoothing during learning" function, based on the distance of the current operating point from the points on the grid. For the interpolation a standard 4-point linear interpolation was used. The error (or difference, as you can call it) that is fed back is the remaining error with respect to the previous adaptation. For several reasons it is not possible to split the learning and adapting phases. This means that I am currently using an iterative loop. I am looking for anything from spline/cubic maps to basis-function networks that could represent the learning map with a limited number of parameters and an efficient calculation sequence. Other ideas using the grid-point learning approach are of course also welcome.
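As one illustration of the basis-function direction mentioned here, a minimal C sketch of a small Gaussian RBF map with a normalized-LMS recursive update. The 3x3 grid of centers, the widths, and the learning rate mu are illustrative assumptions, not from the thread; the point is that 9 weights replace 36 grid points and each step costs O(9).

    #include <math.h>

    #define NC 9                       /* 3x3 Gaussian centers (assumed) */

    /* Illustrative center locations spanning the operating range. */
    static const float c_tqi[NC] = { 0, 0, 0, 125, 125, 125,
                                     250, 250, 250 };
    static const float c_n[NC] = { 800, 2300, 3800, 800, 2300, 3800,
                                   800, 2300, 3800 };
    static const float sig_tqi = 125.0f, sig_n = 1500.0f;
    static float w[NC];                /* learned weights, start at zero */

    static float rbf(int k, float tqi, float n)
    {
        float dt = (tqi - c_tqi[k]) / sig_tqi;
        float dn = (n - c_n[k]) / sig_n;
        return expf(-0.5f * (dt * dt + dn * dn));
    }

    float rbf_eval(float tqi, float n)
    {
        float y = 0.0f;
        int k;
        for (k = 0; k < NC; k++)
            y += w[k] * rbf(k, tqi, n);
        return y;
    }

    /* Normalized-LMS step: one cheap recursive update per measurement. */
    void rbf_learn(float tqi_mes, float n_mes, float v_mes, float mu)
    {
        float phi[NC], norm = 1e-6f, e;
        int k;

        for (k = 0; k < NC; k++) {
            phi[k] = rbf(k, tqi_mes, n_mes);
            norm += phi[k] * phi[k];
        }
        e = v_mes - rbf_eval(tqi_mes, n_mes);
        for (k = 0; k < NC; k++)
            w[k] += mu * e * phi[k] / norm;
    }

Because every basis function overlaps its neighbours, each update also pulls the surrounding region, which gives smoothing without a separate pass over the table.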

Reply to
emiel
