After many years, I've come across a discontinuous analyser again; not a chromatograph, but it behaves the same way. It takes a sample, and about 15 minutes later the analysis output updates. I vaguely recall an old trick for applying PID to these types of signals: put two time constants (first-order lags) in series on the analog sampled-and-held output, then use that filtered signal as the PID input.
What I can't recall is whether each time constant should be half the sample time of the analyser or the full sample time. Has anyone come across this and can recall what the rule of thumb is?
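For anyone picturing the trick, here's a minimal sketch of how I remember it working. Everything here is an assumption on my part: the FirstOrderLag class, the 1-second execution interval, and setting each tau to half the sample time (the full sample time being the other candidate I'm asking about).

```python
# Minimal sketch of the two-lags-in-series smoothing trick (my assumptions,
# not a reference implementation).

SAMPLE_TIME = 15 * 60.0  # analyser update interval, seconds (from the post)
DT = 1.0                 # filter/PID execution interval, seconds (assumed)

class FirstOrderLag:
    """Discrete first-order lag, backward-Euler discretisation:
    y += (dt / (tau + dt)) * (u - y)."""
    def __init__(self, tau, dt, y0=0.0):
        self.alpha = dt / (tau + dt)
        self.y = y0

    def update(self, u):
        self.y += self.alpha * (u - self.y)
        return self.y

# Two lags in series. Each tau is set to half the sample time here, which is
# one of the two conventions in question -- swap in SAMPLE_TIME to try the other.
tau_per_lag = SAMPLE_TIME / 2.0
lag1 = FirstOrderLag(tau_per_lag, DT)
lag2 = FirstOrderLag(tau_per_lag, DT)

def filtered_pv(raw_sampled_output):
    """Pass the sample-and-held analyser value through both lags;
    the result is the smoothed PV fed to the PID."""
    return lag2.update(lag1.update(raw_sampled_output))
```

The point of the two lags, as I recall it, is to round off the staircase steps from the sample-and-hold so the derivative and proportional actions don't kick on every analyser update.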
We're obviously also looking at more sophisticated algos, but in this case there are attractions to sticking with a standard error-based controller.