pseudo-realtime?

As part of the $500 robot project, I have been writing posts that explain why real-time systems can be considered unnecessary for many classes of applications once thought to require them.

As before, the theory is this (short form): since modern CPUs keep track of high-resolution elapsed time in hardware, regardless of the interrupt mask, a module can *always* know, exactly, its position in time.

Since an application can know the precise change in time, many algorithms can be modified to use this information as a variable instead of a constant.

What remains is the variability of scheduling. What is the maximum time between iterations? What is the average time? How often does the maximum occur? Is there a maximum time at all? Lastly, is there a point at which the variability will cause the system to fail? These are important considerations, and of course, the engineer should take steps to mitigate their impact or magnitude.
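A minimal sketch of the idea in C, assuming a Linux/POSIX system with clock_gettime() (the particular timer call is incidental): read the monotonic clock on every pass, feed the measured dt into the math, and keep simple statistics on the scheduling variability:

    #include <stdio.h>
    #include <time.h>

    #define N 100000                 /* number of intervals to sample */

    int main(void)
    {
        struct timespec prev, now;
        double min_dt = 1e9, max_dt = 0.0, sum_dt = 0.0;
        long n;

        clock_gettime(CLOCK_MONOTONIC, &prev);
        for (n = 0; n < N; n++) {
            /* ...control work would go here, using dt as a variable... */
            clock_gettime(CLOCK_MONOTONIC, &now);
            double dt = (now.tv_sec  - prev.tv_sec)
                      + (now.tv_nsec - prev.tv_nsec) * 1e-9;
            prev = now;

            if (dt < min_dt) min_dt = dt;
            if (dt > max_dt) max_dt = dt;
            sum_dt += dt;
        }
        printf("min %g s  avg %g s  max %g s\n", min_dt, sum_dt / N, max_dt);
        return 0;
    }

Run that under a realistic load and the max (and how often it occurs) answers the questions above for that machine.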

If the worst case scenarios are within operational parameters (or are something that can be recovered from), then many of the specialized controllers can be eliminated.

Reply to
mlw

This is very true.

Yup, that is sort of the idea of cluster computing, which I wholeheartedly embrace -- in the right context.

That is a standard computer engineering principle.

This is the sort of thinking that I wish to change. If you *NEED* to run *EVERY* 100ms, then this system is not for you. If you can average 100ms, and cope with 500ms on a rare occasion, then we have something.

Sort of the point I have been trying to make.

Or can package or present the technology in a usable form.

Perhaps. It could simply remain a hobby, I'm not sure where I'm going with this. Popularity, adoption, and speaking engagements would be nice, but right now ....

Actually, the skill set I am targeting is the run-of-the-mill person interested in basic electronics and some computing.

Thanks.

That is sort of my target. The minimum amount required to build a robot.

I am thinking the opposite: I think robotics is too intimidating for people right now. There are too many things to be worried about. Too many different technologies that do the same thing (computing). Why do you need to program the various pieces in different environments?

Just use a general purpose computer to do your computing. Need more processing? Add another general purpose computer.

All the same tools, all the same environment.

Appreciated.

Reply to
mlw

The problem as I see it is that a single processor can only do a single thing at a time. It can interleave the commands sent to it, but it still executes them one at a time.

Those specialized controllers basically add more processors. The more processors, the more you can do at once, and you don't have to rely on fast command interleaving (the raw speed of the processor).

As long as you can keep queuing commands to feed the processor, you should be OK, provided the total execution time of the commands between important steps is less than the absolute maximum time allowed between those steps.

What I mean is that if you have commandA that absolutely MUST be dealt with every 100ms (a long time, I know, but just for reference), then any other commands that take place between commandA(1) and commandA(2) should total no more than (preferably less than) 100ms.
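A sketch of that budgeting rule, assuming POSIX clock_gettime(); handle_command_a() and next_background_command() are hypothetical stand-ins, and the worst-case cost figure is illustrative:

    #include <time.h>

    #define PERIOD_S        0.100   /* commandA must run every 100ms        */
    #define WORST_BG_COST_S 0.020   /* assumed worst case per other command */

    extern void handle_command_a(void);         /* hypothetical */
    extern void next_background_command(void);  /* hypothetical */

    static double now_s(void)
    {
        struct timespec t;
        clock_gettime(CLOCK_MONOTONIC, &t);
        return t.tv_sec + t.tv_nsec * 1e-9;
    }

    void command_loop(void)
    {
        for (;;) {
            handle_command_a();                  /* the must-run command */
            double deadline = now_s() + PERIOD_S;
            /* dispatch other commands only while the remaining slack
               covers their worst-case cost; commandA runs again as soon
               as the slack is exhausted */
            while (now_s() + WORST_BG_COST_S < deadline)
                next_background_command();
        }
    }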

With diligent programming this is achievable, especially with modern processors that are orders of magnitude faster than those of even 5 or 10 years ago.

With the advent and impending proliferation of mainstream dual-core processors (the IBM POWER5 looks especially impressive!), multithreaded control applications make this more achievable than ever before, as long as (and this is the catch) you have the skills to implement it reliably.

I've been following your posts for a long time now and I have no doubt that YOU can pull it off. You seem to know your cookies when it comes to your choice of programming environment. However, you are designing this system for people who may or may not have your skillset. All the documentation in the world isn't going to alleviate the learning curve, and it's entirely possible people may pass over your product for something they're more familiar with.

On the other side of that coin, there are probably more programmers out there than hardware "engineers", and if you eliminate the hardware from the equation, it could (and probably would) open up a whole new segment of users who up until now wouldn't have given robotics a second look because, as far as they're concerned, Ohm's law has something to do with Buddhist chants.

I'm not arguing against you (as, yes, I have a lot in the past). The fact that you've put what you've been preaching into practice gives you much more credence in my mind, so I don't feel the need to do that anymore. I can tell you from my experience, though, as someone who has been fighting the business battle for ~5 years now: you have to get past the "if it can be done I'll do it" thought, and start thinking more along lowest-common-denominator lines. From a business perspective, you don't want to back yourself into a corner in an already saturated market. You want to be able to pitch your system as being better than the rest, while at the same time allowing people to not be too intimidated to buy into your system, as compared to someone else's.

Just my $.02

--Andy P


Reply to
Andy P

Don't be so sure. If packaged correctly, no one will know it is a "techie" product.

And what happens when VB is no longer available? Even so, it can be packaged as an OCX or VB plugin, or even a Windows DLL.

We are using "environment" in different contexts.

I can support Windows, if it ever comes to that.

The language "I" use to develop a system is largely irrelevant as long as a proper API is available to "your" language.

As for the OS, as I said, if push comes to shove, I can support Windows -- easily. I have a whole cross-platform library that works across many different operating systems, but I suspect OS X is the better bet.

Reply to
mlw

There are plenty of examples of that.

Reply to
mlw

Why would there need to be any in the first place? If your algorithms are designed with the reality that they don't have rigidly fixed periods, then there is no problem.

What computational power? A couple of floating-point calculations are nothing to today's processors.

Actually, most of the control math uses time as a variable. Many control algorithms use a fixed change in time as a shortcut.

I would say trivially.

Yes, in a precision environment with absolute performance requirements you need a tightly controlled loop. In a mobile robot that is going over various obstacles and/or uneven surfaces, with a load that changes by orders of magnitude, the kind of precision you are talking about doesn't -- can't -- exist. In a world in which half an inch is about the best you can do, being exact to a thousandth makes no sense.

Still, all the math uses time as a variable, thus variable time should not be a problem.

Actually, I don't see much of a theoretical reason why it would be less stable or work less well. There is a range of time periods that will work with any system. If the time period is too short, the resolution of your encoder system becomes a problem; for instance, your sample period spans only one or two counts of the encoder at the current speed. A gut calculation says uncertainty must be below 10%, so your period can't be shorter than 10 encoder counts. If the time period is too long, your encoder overflows and causes an error, or your system responds too fast to changes and starts to become erratic. As long as you fall between the two extremes, there is no real reason why it would not work.
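To make that gut calculation concrete (figures illustrative): with +/-1 count of quantization error, a sample has to span at least 10 counts to keep the uncertainty under 10%, so:

    /* minimum sample period for <10% quantization uncertainty,
       given the encoder count rate at the current speed */
    double min_sample_period(double counts_per_second)
    {
        return 10.0 / counts_per_second;  /* e.g. 2000 counts/s -> 5ms */
    }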

Of course, how well the system maintains speed can be affected by sample rate (if the load is somewhat dynamic), but it is also a product of the variability of the environment. Motors are not infinitely powerful; rapid, sizable, and unpredictable changes in load will cause variability in speed.

Download a working model from

formatting link
:-) Actually, it is not a dramatic creation; it is just a PID calculation where the change in time is not a constant.
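(Not the linked model itself, but a minimal sketch of the same idea in C: a PID update that takes the measured dt as a parameter instead of folding a constant period into the gains. Names and gains are illustrative.)

    typedef struct {
        double kp, ki, kd;   /* tuning gains                 */
        double integral;     /* accumulated integral term    */
        double prev_error;   /* error at the previous sample */
    } pid_state;

    /* error = setpoint - measurement; dt = measured elapsed time in seconds */
    double pid_update(pid_state *s, double error, double dt)
    {
        s->integral += error * dt;                        /* I term scales by dt  */
        double derivative = (error - s->prev_error) / dt; /* D term divides by dt */
        s->prev_error = error;
        return s->kp * error + s->ki * s->integral + s->kd * derivative;
    }

With a constant dt this reduces to the textbook discrete PID; a fixed period merely lets you fold dt into ki and kd ahead of time, which is the shortcut mentioned above.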

Reply to
mlw

They will pass over the product. It is a techie programmer product without any intention of meeting the user halfway.

And most of them use Visual Basic.


But there is more than one programming tool and more than one environment. We get back to the problem of "intimidation" by a programming tool and OS that are not mainstream. Most likely more people into robotics understand uP than have a grasp of your particular choice of OS or language skill requirement.

JC

Reply to
JGCASEY

This is where I see the main problem with this approach: finding out what effects the non-periodic sampling of the inputs and the outputs will have on the overall system behavior. By finding out I mean characterizing the system before implementation, before actual measurements can be done. I also find measuring these effects rather troublesome, which makes an analytical approach even more important. I also find it somewhat hard to justify the non-worst-case computational power requirement if the worst-case behavior is good enough, but that's a different story...

Digital control theory, and in general discrete-time systems theory, uses periodic samples and derives its conclusions from those types of signals. While qualification of systems and signals with non-periodic sampling can most likely be done, it's much more complicated than for traditional, periodically sampled systems. There are ways to approximate a non-periodically sampled system (one I showed you in a previous post). Another way is to consider the amount of non-periodicity in your sampling as clock jitter (and as such a type of noise) and your way of dealing with it as a noise-cancellation process. I'm sure there are others. Either way, I'm fairly certain that the behavior of your system will be somewhat different from a periodically sampled equivalent and the difference will be non-trivial. For example, finding the mean phase margin of a control loop driven by such a non-periodic controller would be rather hard.

I guess my point is: in some cases (maybe in many cases) the 'it works well enough' engineering approach might do, but there will be times when a more elaborate mathematical underpinning is required. Such a general underpinning is not easy to develop, and I certainly haven't seen one. Of course, if you're planning to do that as well (i.e. develop an aperiodically sampled systems theory), I wish you the best. And I definitely want to hear about your results!

Regards, Andras Tantos

Reply to
Andras Tantos

If you do that then great. It would be cool to have an API for C and some example code on how to use the API that would enable me to grab images from a webcam into byte arrays in real time.

JC

Reply to
JGCASEY

Why would it need to be proved beyond the obvious? I'm not sure why you insist that regular rigid timing is required. Just because people have been using a fixed period in order to eliminate the change in time calculation doesn't mean that it has any inherent accuracy.

Many sampling algorithms use pseudo-random sampling periods in order to detect trends that may be missed by fixed-frequency sampling. Fixed sampling rates do not impart any additional accuracy that I'm aware of.

You're not serious are you? There is so much absurd about this statement in the context of a robot. This is a thread in and of itself.

This is a silly argument: if the system just sitting there has 1.0% utilization, and 2.0% utilization when running motor control, I think we can safely say it is trivial.

Actually, I was talking about the theory as well.

Which theory? Standard PID? It was originally built with analog components.

ok

You are building on the assumption that one *has* regular samples. If you take that away, then you need to think about the problem differently. Don't think about the most efficient algorithm; this may be trading a bit of efficiency for easier deployment.

A fixed sampling period makes the assumption that each sample represents a linear change in state. Over time you plot the curves. In a system with a variable sampling period you interpolate what the points would be, based on the same assumption that the change in state is linear.
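A sketch of that interpolation, under the same linearity assumption (purely illustrative):

    /* estimate the value at time t, given two timestamped samples
       (t0, v0) and (t1, v1), assuming linear change between them */
    double interpolate(double t0, double v0, double t1, double v1, double t)
    {
        return v0 + (v1 - v0) * (t - t0) / (t1 - t0);
    }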

As you said, my system probably has to sample more often than a fixed-sampling-rate system; thus, on average, my accuracy will be better.

sample/time, the same representation you use, except that your time is implied in the samples by virtue of a fixed sample rate. Mine is explicit and no less accurate.

You were trying to simulate a fixed-sample-rate system with variable samples; why bother? Just use the variable sample rate.

It is basically incorporating the change in time along with the change in state, no more, no less.

You can only know the precision inherent in a system.

Exactly, "the required precision" translates to the "possible precision." Trying to design beyond the possible precision is impossible.

How so?

I see it a different way: there is nothing inherent that requires a fixed sample rate. I could build a PID circuit out of a few op-amps and a handful of passive components, with no sample rate at all. What matters is change in state vs. change in time.

Reply to
mlw

You yourself said that "many algorithms can be modified to use this information as a variable instead of a constant." The natural question to ask after such a statement: OK, they can, but how would such a modification affect their performance (performance in its broadest sense)? You're saying that there is no effect (am I wrong?) and I'm saying: prove it! What I'm saying is that the *analytical* answer to this question is not an easy one, and in *some* applications the analytical approach is required. You cannot always just build it and try it out.

Any additional (floating-point) instructions burn more power and reduce battery life. You might say it's negligible, but in some cases it might be important. If on average you run twice as fast as the worst case, and the worst case is still good enough, you used up twice the (computational, battery, whatever) power that you needed. The more you push your system to the limit, the further apart your worst-case and average timings probably get. If you don't push your system to the limit, you're wasting resources: you're using a more powerful computer than you need to. But again, this is not my main point, it was just a side note.

The algorithms maybe, but the underlying theory not.

The theory uses the assumption that the input signal is bandwidth limited. It is a consequence of this that there exists a minimal (regular) sampling rate that results in a data stream completely describing the input signal. It also follows that additional samples (more frequent sampling) don't carry any additional information. If you say that there is additional benefit to (irregular) more frequent sampling, or that you can (intermittently) live with a below-minimum sampling rate, this assumption is not valid any more. If you're saying either of these, then traditional theory cannot be applied to your systems, and if not, I fail to see the benefit of such a system.
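(For reference, the standard statement being relied on here is the sampling theorem: a signal band-limited to B is completely determined by uniform samples taken at any rate f_s >= 2B, via the reconstruction

    x(t) = \sum_{n=-\infty}^{\infty} x(nT) \, \mathrm{sinc}\!\left(\frac{t - nT}{T}\right),
    \qquad T = 1/f_s, \quad f_s \ge 2B

with sinc(u) = sin(pi*u)/(pi*u). The uniform spacing T is baked into the formula, which is exactly why irregular samples fall outside it.)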

The theory uses (discrete) Fourier transforms in many cases and frequency-domain representation to describe system behavior. The discrete Fourier-transform doesn't work (doesn't give you meaningful results) if your signal isn't regularly sampled. Frequency-domain analysis becomes impossible, pole-zero configurations, transfer functions in Z domain become meaningless.

For discrete-time systems the theory assumes that a series of real numbers completely describe a (BW limited) time-domain function, and a set of these (inputs and outputs, one series for each) completely describe a (linear) system. Even this assumption isn't valid any more. Your signal domain changes, since you can't represent your input signal with a series of numbers any more. You have to use a series of pairs (sample, time), or some other representation.

That's what I meant by traditional signal analysis not being applicable. You can approximate such systems with continuous-time or discrete-time (and regularly sampled) equivalents. I've shown you one example before for your PID loop. I'm sure there are other ways to do it.

Any references? And I mean references where it is shown that it is 'trivial' in general, not for a couple of particular applications.

True. But it doesn't mean that not knowing how imprecise you are is acceptable. Good design takes the uncertainty of the measurements (inputs) and the acceptable uncertainty of the controls (outputs) and calculates the required precision of the system between the two. If your implementation is within the limits of this required precision, you're good; if not, you're not. What I'm saying is that answering this question is hard(er) for a non-regularly sampled system than otherwise.

What you're saying is: I've done it once, so it can be done. What I'm saying: just because you've done it once it doesn't mean it always can be done. I would love to see proof that I'm wrong but so far you've failed to provide that.

Regards, Andras Tantos

Reply to
Andras Tantos

Well, my project has only to do with mobile robotics; the webcam stuff is not anything I would be addressing.

That, of course, is your opinion, one, which you should know, I do not hold.

Reply to
mlw

Then I stand by what I wrote, it is a techie project. If it requires searching the net for program fragments then it will not be packaged correctly and everyone will know it is a techie product and be suitably intimidated. I am :)

It is a techie software product rather than a techie hardware product.

Nothing wrong with a techie product providing you don't imagine that somehow it will have wider appeal. This will have to be left to the businessmen who really understand the needs of a wider market and can employ the techies to produce the robot that others can use to try out their vision for the future of robotics.

As for the techies themselves I would suggest that they are not in the least bit intimidated by "too many different technologies" and in fact most of them would love all that stuff.

And any up and coming techies have to learn all about microcontrollers etc so of course it makes sense to use them in their projects.

JC

Reply to
JGCASEY

Maybe I had a little practice; I've worked on a number of projects in which this thinking was "obvious." Anyway, that link looks cool. I'm going to have to slog through it.

Correction: you don't know when it *will* be taken, but you know exactly when it *was* taken.

My first version was like that. It has since taken the elapsed time into account.

What's the most recent date of the code you've seen?

In the strictest sense, there is a line between each point. If you have some preconceived waveform, you can fit the shape to the points and interpolate (if, of course, the points can make that shape).

Yes, I know that the real signal is not linear, but we assume it is for calculation. I thought that was the point I was trying to make.

I wouldn't go that far.

ok

And the system is unstable, absolutely, without some upper and lower limits for T.

Think about upper and lower limits for T, and you'll see it is pretty obvious.

Reply to
mlw

With some caveats....perhaps

We've had this conversation before: there is no longer anything like being "swapped out." In the old days, when memory management was not so fine-grained, whole processes were "swapped out." These days we are talking about advanced page-swapping mechanisms that can virtualize in very small chunks (4K and 4M pages).

There are *many* mechanisms to keep yourself from having your data paged.

Your periodic process would likely be high on the MRU list and only consist of a few pages, thus making LRU selection unlikely.

Lastly, buy more RAM, make vital pages non-swappable, or run at a high priority.
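On Linux, two of those mechanisms are a couple of calls away (a sketch; the priority value is illustrative, and both calls typically need root privileges):

    #include <sched.h>
    #include <stdio.h>
    #include <sys/mman.h>

    int make_unswappable_and_urgent(void)
    {
        struct sched_param sp = { .sched_priority = 50 };  /* illustrative */

        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {     /* pin all pages in RAM */
            perror("mlockall");
            return -1;
        }
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) { /* high fixed priority  */
            perror("sched_setscheduler");
            return -1;
        }
        return 0;
    }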

Yup, hardware devices do lock up, but "good" operating systems can carry on just fine while that reader process is blocked. If you have a situation where many processes are blocked, say because a common disk is hosed and not responding, then those processes waiting for the disk will probably die. That is a catastrophic failure, and it is probably acceptable to fail safe and shut the system down.

Just run as a very high priority task.

The plug to a life support system can be pulled by an idiot as well. So?

Yup, but as long as T is within some variation, and there is a deadman switch, the thing is safe.

Also, don't forget that this system is not running on someone's desk. You are right to assume that few processes will run on the system, that hardware with good drivers will be selected, etc.

This may be a PC-based system running Linux, and it may be built using general-purpose components, but it is hardly a general-purpose application.

Reply to
mlw

Well, I did, because so far no one (including you) has shown me a proof that such a generalization is valid. But I did my homework, looked around, and I have to admit, there *is* at least one theory that handles irregularly sampled discrete-time systems. One relatively concise description is here:

formatting link
Now, maybe it is just me, but I've read it through and I wouldn't say it's 'obvious'.

I can see how random sampling can trade aliasing effects for noise (i.e. make the aliasing effect more noise-like), but that trade requires 'randomness', i.e. that you don't know when exactly the sample was taken.

For your PID implementation I've shown you that it is identical to a traditional, fixed-sample PID controller with noise on its output. I've shown you that the additional noise on the output is the result of your non-fixed sampling. In that sense your controller is less precise than that identical controller could have been: your changes added noise to its output.

Nope, it makes the assumption that the original continuous-time signal is BW limited, or that its spectrum is periodic. Every linear (discrete or continuous) system has the assumption that the state transitions are linear.

I don't see what you mean here. Are you saying that the input signal changed linearly between sampling points? If that's the case, that's not an assumption (if it is, it's false); it's an approximation. That's not even true for plain-vanilla BW-limited, regularly sampled signals.

None of the transformations (discrete Fourier, Z-transform, even the sample-domain difference equations) are equipped to handle non-constant sample intervals. Your very tools for analyzing the system become useless, and as such the conclusions derived with those tools become useless as well.

There are tools to handle these systems, that's true; I've just learned that. But those tools are different. Again, see the above reference.

To be able to apply customary math and draw some conclusions.

Well, for one, one of the results of the above-referenced work is that the state-transition matrix for a non-repetitively sampled system (i.e. when the sample time is not only non-constant but non-periodic as well) is not time-invariant.

Math over time-variant systems is much harder. It is not as easy to provide closed-form equations for properties of the system. For example, they say that the stability criterion for such systems is this (with the state-transition matrix being T):

lim_{N -> inf} T^N = 0

If T is time-variant (i.e. T is a function of N) then this cannot be brought to a closed form. This means that stability of such a system cannot be analytically proven, only approximately postulated.

In conclusion, I have to correct myself: there is math and theory to handle your type of systems. It is, however, not 'obvious', at least not to me.

Regards, Andras Tantos

Reply to
Andras Tantos

Of course, the gap between your two viewpoints is this:

one of you says "T is unbounded on the high side, and therefore stability (and even controllability) cannot be guaranteed"

the other says "T is 99.9% likely to be within reasonable bounds, and that is Good Enough".

If you were to manufacture a product using the latter model, you would have to do a lot of work before you could legitimately apply a CE mark to it.

Note also that, in the case of a general-purpose multi-task computer system, there *is* no upper bound for T. Examples:

1) Some other running program uses all RAM, and your program code gets swapped out. T is now >> 20ms.
2) A hardware device fails to respond while being polled, and the device driver locks the machine up until it responds or a hard timeout occurs.
3) A higher-priority POSIX real-time task runs to completion on a very slow job.
4) The user pauses your task.

Note also that, as well as the *scheduling* delays, there may well also be *communication* delays: you issue a command, but owing to the way the driver and the comms hardware work, there is a variable delay before your command is received, and similarly for data flowing from the hardware. (Which is why I like to stuff rt_get_cpu_time_ns() into a struct sensor whenever it is updated: it means I can produce a useful derivative estimate as well.)
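A sketch of that struct, substituting POSIX clock_gettime() for the rt_get_cpu_time_ns() mentioned above; field names are illustrative:

    #include <time.h>

    struct sensor {
        double value;   /* latest reading                  */
        double t;       /* time the reading was taken (s)  */
    };

    static double now_s(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec * 1e-9;
    }

    void sensor_init(struct sensor *s, double v)
    {
        s->value = v;
        s->t = now_s();
    }

    /* store the new reading with its timestamp and return a derivative
       estimated over the *actual* interval, whatever the delays were */
    double sensor_update(struct sensor *s, double v)
    {
        double t = now_s();
        double d = (v - s->value) / (t - s->t);
        s->value = v;
        s->t = t;
        return d;
    }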

cheers, Rich.

Reply to
Rich Walker

MLW:

Is there some reason why you are not using a real-time extension to Linux?

-Wayne


Reply to
Wayne C. Gramlich
