Hi all.
Say you have a system you want to control, with an output you want to maximize. When you increase the input variable, two things can happen:
- either the output is already at its maximum, so you see no change,
- or the output drops a little, so you have to decrease the input to make it climb back up.
The problem is: you want the input variable to be as large as possible, as long as the output variable is increasing or holding at its maximum.
I solved the problem with a kind of "fuzzy" (call it commonsense) control, but I'd like a more formal solution whose stability and performance I could study.
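For concreteness, the rule I'm using is roughly the following Python sketch (`read_output()` and `set_input()` are hypothetical hooks into the plant, not my real interface):

```python
def perturb_and_observe(read_output, set_input, u=0.0, step=0.1, n_iters=200):
    """Keep nudging the input up while the output holds its maximum;
    step back down whenever the output drops."""
    set_input(u)
    y_prev = read_output()
    for _ in range(n_iters):
        u_trial = u + step
        set_input(u_trial)
        y = read_output()
        if y >= y_prev:          # output stayed at its max (or rose): keep the larger input
            u = u_trial
            y_prev = y
        else:                    # output dropped a little: undo the step
            set_input(u)
            y_prev = read_output()
    return u


if __name__ == "__main__":
    # Toy plant for illustration only: output is flat at 1.0 until the input
    # passes 5.0, then falls off linearly.
    state = {"u": 0.0}

    def set_input(u):
        state["u"] = u

    def read_output():
        return 1.0 if state["u"] <= 5.0 else 1.0 - 0.2 * (state["u"] - 5.0)

    print(perturb_and_observe(read_output, set_input))  # settles near 5.0
```

It works, but it's just step-and-check hill climbing, and I have no way to argue anything rigorous about convergence, noise sensitivity, or how fast it tracks a moving maximum.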
Thanks for any help.