Danfoss (Siemens) Mag Flow Validator

Can anyone explain the operation of the Danfoss (Siemens) mag flow meter validator used with the Ad-Mag 5000 mag flow meters? The verification reports produced by the software from a validator test give the meter's output against theoretical values at three points: 0.5, 1.5 and 3.0 m/s. The output at these flow rates is compared against the theoretical flow rates. The theoretical flow rate on the reports gives an output of 8 mA, 25% of a 4-20 mA range. Why does this instrument not compare over the full range of a 4-20 mA output? TIA
Reply to
BIGEYE

Because three points are all that is required to determine the offset, gain, and linearity of the 4-20 mA output; there's little need to drive it all the way to 20 mA. Also, the Verificator may not be capable of generating the magnetic field equivalent to 20 mA (12 m/s).
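A quick back-of-the-envelope check, assuming the meter is spanned 0-12 m/s over 4-20 mA (a rough Python sketch, not the validator's actual algorithm):

def velocity_to_ma(v_mps, full_scale_mps=12.0):
    # Linear 4-20 mA scaling: 4 mA at zero flow, 20 mA at the full-scale velocity.
    return 4.0 + 16.0 * (v_mps / full_scale_mps)

for v in (0.5, 1.5, 3.0):
    print(f"{v} m/s -> {velocity_to_ma(v):.2f} mA")
# 0.5 m/s -> 4.67 mA, 1.5 m/s -> 6.00 mA, 3.0 m/s -> 8.00 mA

So the 3.0 m/s point is the 8 mA (25%) figure on the report, and all three test points sit in the lower quarter of the output range.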

--Gene

Reply to
Gene S. Berkowitz

You can determine the linearity only of that part of the range encompassed by the three points, and then only approximately. Only a graph covering the entire range is ironclad.

Jerry

Reply to
Jerry Avins

On Fri, 17 Nov 2006 23:20:33 -0500, Jerry Avins proclaimed to the world:

A three-point check is the minimum calibration check generally accepted in any of the standards. While what you say is correct in principle, reality is much different. Taking readings at 20, 50 and 80 percent and extrapolating to the two extremes is preferable to 0, 50 and 100 percent checks. There are several reasons for this, some of which come from the legacy of analog sensors. We can go into this later if you like.
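As a rough illustration of the 20/50/80 approach (a Python sketch with made-up readings, not values from any standard):

import numpy as np

applied = np.array([20.0, 50.0, 80.0])        # percent of span applied to the instrument
indicated = np.array([20.3, 50.1, 79.8])      # hypothetical readings from the device under test

gain, offset = np.polyfit(applied, indicated, 1)   # least-squares straight line through the three points
print(f"gain = {gain:.4f}, offset = {offset:.3f}")

for pct in (0.0, 100.0):                      # extrapolate the fitted line to the range extremes
    print(f"predicted reading at {pct:.0f}%: {gain * pct + offset:.2f}")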

To answer the original question, I would guess the validator was engineered this way because they felt it was more important to check the meter output in the range where most errors happen. Mag meters have the most errors in the first 20 percent of their range. As the magnetic field increases, the signal-to-noise ratio increases, so the upper range of the meter is more reliable and accurate than the lower range. Checking the accuracy of the electronics in this range makes sense, but I would still include a point near full scale. There may be some limitation that I don't know of that made it difficult to simulate higher flow rates. As I have said before, the question would best be answered by calling the manufacturer and talking to the guy who designed the validator electronics. It would be nice if BIGEYE did this and let us know what they said.

In the application here, the only way to check the function and accuracy of a mag flow meter is to simulate the field conditions of the coils at some predetermined flow rate. This only checks the electronics. The other check that can be done on site is measuring the resistance of the coils, both the winding resistance and the insulation resistance with respect to ground. Some validators do both. Most of the errors I have measured in mag meters have come from resistance problems.
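As a trivial sketch of that kind of on-site check (the limits below are placeholders; the real acceptance values come from the meter documentation):

def coil_checks(winding_ohms, insulation_megohms,
                winding_lo=50.0, winding_hi=150.0, insulation_min=100.0):
    # Placeholder limits, not Danfoss/Siemens figures.
    winding_ok = winding_lo <= winding_ohms <= winding_hi      # coil winding resistance in range
    insulation_ok = insulation_megohms >= insulation_min       # insulation resistance to ground high enough
    return winding_ok, insulation_ok

print(coil_checks(winding_ohms=92.0, insulation_megohms=550.0))   # (True, True)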

In mag flow meters you are always assuming something. If the conditions in the primary measuring element are within design specs, then a voltage that is linear with the liquid velocity is produced. Calibration is a matter of verifying that those conditions exist.
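The relationship behind that assumption is Faraday's law: the electrode voltage is proportional to flux density, electrode spacing and mean velocity. A rough sketch (the constant k and the numbers are only illustrative):

def electrode_voltage(b_tesla, d_meters, v_mps, k=1.0):
    # E = k * B * D * v; k lumps the geometry and profile factors.
    return k * b_tesla * d_meters * v_mps

# Doubling the velocity doubles the signal, provided the field and geometry stay within spec.
print(electrode_voltage(b_tesla=0.005, d_meters=0.45, v_mps=1.5))   # 0.003375 V, i.e. ~3.4 mV
print(electrode_voltage(b_tesla=0.005, d_meters=0.45, v_mps=3.0))   # 0.00675 V, i.e. ~6.8 mV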

This is in opposition to the premise of calibration traceability in its purest form, but real-world calibrations are often this way, particularly in flow measurement. Doing a direct comparison of flow is very rare; only a few labs in the world have the capability to do direct comparisons. I did a rather detailed investigation into what could be done to establish a better chain of traceability for a bunch of 18 and 24 inch mag meters used to measure waste water flow being pumped to a central treatment plant. Each meter was used to bill a different municipality. We had problems with the accuracy of the meters, which were 20 years old. We used doppler meters to compare the measurements, but doppler meters are not as accurate as mag flow meters out of the box. I usually try to have a "standard" that is four times more accurate. The combination of this comparison and extensive mag meter checks was the best solution we could find until the mag meters could be replaced. We found that the errors were all caused by breakdown of the coil insulation at the potted penetrations.
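To put a number on the "four times more accurate" rule of thumb (illustrative figures, not the actual site data):

uut_tolerance = 0.5      # percent of reading we were trying to prove on the mag meters
reference_error = 2.0    # percent, an illustrative figure for an out-of-the-box doppler meter

ratio = uut_tolerance / reference_error
print(f"accuracy ratio = {ratio:.2f}; a 4:1 ratio would need a reference good to "
      f"{uut_tolerance / 4:.3f}% or better")

With a reference that coarse, the doppler comparison alone could never prove the 0.5% figure, which is why it had to be combined with the other checks.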

BTW, we also attempted to do drop tests, measuring the flow by the change in level of the wet wells. I did a bunch of measurements of the concrete tanks and quickly learned that as little as a one inch change in the dimensions of the tank would produce errors in my calculation that exceeded the target accuracy we were trying to prove (0.5%). Still, using two separate comparisons in combination with a validator (simulator), and having all three measurements agree, is a very powerful pacifier. I would love to hear a better solution to flowmeter calibration short of removing the meter and shipping it back to the manufacturer.
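To put rough numbers on that sensitivity (the wet-well dimensions and times below are assumed, just to illustrate Q = A * dh/dt):

def drop_test_gpm(length_ft, width_ft, drop_ft, minutes):
    # Flow inferred from the rate of level change in a rectangular wet well.
    volume_ft3 = length_ft * width_ft * drop_ft
    return volume_ft3 * 7.48 / minutes        # 7.48 US gallons per cubic foot

q_nominal = drop_test_gpm(20.0, 15.0, 2.0, 5.0)
q_off_by_inch = drop_test_gpm(20.0, 15.0 + 1.0 / 12.0, 2.0, 5.0)   # one inch error in one plan dimension
print(f"{q_nominal:.0f} gpm vs {q_off_by_inch:.0f} gpm, "
      f"{(q_off_by_inch / q_nominal - 1) * 100:.2f}% apart")        # ~0.56%, already past the 0.5% target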

Sorry for the ramble. :-) At least I didn't tell any old navy stories this time. Perhaps there is an answer of sorts in there somewhere.

Reply to
Paul M

Paul, you wrote a lot of important information; that's not rambling. A three-point check is a useful and usually adequate way to verify that a system that once worked properly still does, but it can provide no new information outside the checked range. For example, a 4-to-20 mA transmitter may be incapable of driving more than, say, 15 mA at the burden it carries, yet be quite linear at lower currents.
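The arithmetic behind that example (a rough sketch; the supply and burden values are only illustrative):

supply_v = 24.0           # loop supply voltage
transmitter_min_v = 12.0  # assumed minimum voltage the transmitter needs across itself
loop_ohms = 800.0         # total burden: receiver resistor plus wiring, deliberately high here

i_max_ma = (supply_v - transmitter_min_v) / loop_ohms * 1000.0
print(f"maximum drivable current ~ {i_max_ma:.0f} mA")   # ~15 mA: the loop saturates before 20 mA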

Emphasizing the lower parts of the range when calibrating is often a good idea. To achieve anything like the relative error of the upper part of the range, the absolute accuracy at the low end must be correspondingly better.
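Putting numbers to that (illustrative accuracy target):

target_pct = 1.0   # illustrative percent-of-reading target
for v in (0.5, 1.5, 3.0, 12.0):
    print(f"at {v:4.1f} m/s, +/-{target_pct}% of reading is +/-{v * target_pct / 100:.4f} m/s")

The same percent-of-reading target at 0.5 m/s demands an absolute error more than twenty times smaller than it does at full scale.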

Jerry

Reply to
Jerry Avins
