I am trying to learn system deviations in a map on an iterative basis (combustion adaptation for diesel engines). It is meant to compensate for unexplainable/unmodeled errors in the system that show a relation with torque and engine speed (car engine). The idea is to use an adaptation algorithm as a temporary solution to decrease these system deviations and, at the same time, use the obtained data to learn more about the system. For this automotive application I use a linearly interpolated table of size 6x6 (memory limitation) with torque and engine speed as inputs. Outside the area for which the table is defined, the table outputs a zero value. Smoothing of the table can be done in several ways. One method is to use a smoothing algorithm that goes through the complete table; this would, however, require too much processing power for real-time operation.
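For reference, the table lookup (linear interpolation inside the defined area, zero outside) could be sketched in Python as below. The breakpoint vectors and table contents are placeholders, not values from a real calibration:

```python
import numpy as np

# Hypothetical 6x6 adaptation table with its grid axes (breakpoints).
TQI_AXIS = np.linspace(0.0, 250.0, 6)     # engine torque breakpoints [Nm] (assumed)
N_AXIS = np.linspace(800.0, 4300.0, 6)    # engine speed breakpoints [rpm] (assumed)
TABLE = np.zeros((6, 6))                  # learned correction values, initially zero

def lookup(table, tqi, n):
    """Bilinear interpolation of the table; zero outside the defined area."""
    if not (TQI_AXIS[0] <= tqi <= TQI_AXIS[-1] and N_AXIS[0] <= n <= N_AXIS[-1]):
        return 0.0
    # index of the grid cell containing the operating point
    i = min(np.searchsorted(TQI_AXIS, tqi, side="right") - 1, len(TQI_AXIS) - 2)
    j = min(np.searchsorted(N_AXIS, n, side="right") - 1, len(N_AXIS) - 2)
    # fractional position inside the cell (0..1)
    u = (tqi - TQI_AXIS[i]) / (TQI_AXIS[i + 1] - TQI_AXIS[i])
    v = (n - N_AXIS[j]) / (N_AXIS[j + 1] - N_AXIS[j])
    return ((1 - u) * (1 - v) * table[i, j] + u * (1 - v) * table[i + 1, j]
            + (1 - u) * v * table[i, j + 1] + u * v * table[i + 1, j + 1])
```

On an ECU this would of course be fixed-point C rather than NumPy; the sketch only fixes the lookup semantics.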
A well-known method is distance-dependent (or in this case "area"-dependent, as a substitute to increase calculation speed) learning. Let's define a function for the linearly interpolated table: F(TQI,N), where TQI is the engine torque and N the engine speed. Let's define our current operating point: (TQI_mes,N_mes) (mes = measured). Then, for the 4 table grid points surrounding the point (TQI_mes,N_mes), a factorization can be made that divides the measured system error to be learned over these four grid points. Using the (TQI_mes,N_mes) location and the known locations of the 4 grid points, 4 areas can be defined that split up the area between the 4 grid points. Example of the area split-up:
*------------*
|A1    |  A3 |
|      |     |
|______._____|
|A2    |  A4 |
|      |     |
*------------*

where "." is (TQI_mes,N_mes), "*" is a grid point of the table, and A1...A4 are the areas. Suppose the left-bottom grid point is updated; then fac = A3/(A1+A2+A3+A4) defines a learning ratio for this point. The learning factor tends to 1 when the operating point approaches the left-bottom grid point. Now define V_mes as the system deviation calculated from the measurements taken in operating point (TQI_mes,N_mes). Then the learning step is x = V_mes - lininp1(F(TQI_mes,N_mes)), where lininp1 is a linear interpolation. The increment on the left-bottom grid point becomes x*fac*alpha, where alpha is an extra overall tuning factor defining the learning speed. This is done for all four points during one learning step.
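The complete learning step can be sketched in Python (axis vectors, the alpha default, and function names are illustrative; the area factors reduce to the standard bilinear weights once normalized):

```python
import numpy as np

def learn_step(table, tqi_axis, n_axis, tqi_mes, n_mes, v_mes, alpha=0.1):
    """One area-weighted learning step on the adaptation table.

    Each of the four surrounding grid points receives the increment
    x * fac * alpha, where fac is the area of the sub-rectangle diagonally
    opposite that grid point divided by the total cell area.
    """
    # locate the grid cell containing the operating point
    i = min(max(np.searchsorted(tqi_axis, tqi_mes, side="right") - 1, 0),
            len(tqi_axis) - 2)
    j = min(max(np.searchsorted(n_axis, n_mes, side="right") - 1, 0),
            len(n_axis) - 2)
    # fractional position inside the cell (0..1)
    ru = (tqi_mes - tqi_axis[i]) / (tqi_axis[i + 1] - tqi_axis[i])
    rv = (n_mes - n_axis[j]) / (n_axis[j + 1] - n_axis[j])
    # current table prediction at the operating point, lininp1(F(TQI_mes, N_mes))
    pred = ((1 - ru) * (1 - rv) * table[i, j] + ru * (1 - rv) * table[i + 1, j]
            + (1 - ru) * rv * table[i, j + 1] + ru * rv * table[i + 1, j + 1])
    x = v_mes - pred  # system deviation still to be learned
    # fac per corner = opposite area / total area; e.g. left-bottom: A3/(A1+A2+A3+A4)
    fac = np.array([[(1 - ru) * (1 - rv), (1 - ru) * rv],
                    [ru * (1 - rv), ru * rv]])
    table[i:i + 2, j:j + 2] += alpha * x * fac
    return table
```

Note that the four factors sum to 1, so with alpha = 1 and the operating point exactly on a grid point, a single step drives the table value at that point to V_mes.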
The result is a more or less smooth table, depending heavily on the signal distribution properties. With unequal distributions (some points are learned more often than others), the deviations become far too high.
The expected curve characteristics cannot be specified in much detail. The characteristics that will finally be measured can vary a lot, as they depend strongly on the way the vehicle/system is calibrated; this ranges from a flat plane to a nonlinear surface with local minima.
As the above smoothing solution loses accuracy under unequal distributions, it is not suitable for higher-accuracy applications. I am looking for other methods that can smooth data grids, or learn grid points in a smoothing way, with low memory resources and preferably low computational power requirements.