I have a running Standard Deviation calculation, and sometimes the math output is "NaN". Any idea what this means? As you can see, all the numbers used for the calculation look fine.
P1K Float "NaN" ???

Floating point cannot actually result in just ANY number, so this is most likely your problem.
Your formula has "to the power of half", which is the same as square root, correct? I believe this area of your formula, especially when combined with the dividing you are doing, is most likely the source of your problem.

Last edited by MikeN; 10-09-2019, 02:00 PM.

https://en.wikipedia.org/wiki/NaN#Op...generating_NaN
Your code/system should be aware of these situations. The code or HMI could range-check function input parameters, test for invalid parameters and set an error bit, or return an acceptable "bad" but valid floating-point value, etc.
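As a sketch of that range-checking idea (the names here are illustrative, not from the OP's PLC program), a wrapper can validate the input and set an error flag instead of letting a NaN propagate silently through later math:

```python
import math

def safe_sqrt(x):
    """Range-check the input before taking the square root; on a bad value,
    return NaN plus an error flag ("error bit") instead of silently
    propagating NaN into downstream calculations."""
    if not math.isfinite(x) or x < 0.0:
        return float("nan"), True   # error bit set
    return math.sqrt(x), False

value, error = safe_sqrt(-0.0001)   # e.g. a variance term that went negative
assert error and math.isnan(value)  # note: NaN != NaN, so test with isnan()
```

Note that NaN compares unequal to everything, including itself, which is why the check must use `isnan()` rather than an equality comparison.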

As data is collected I get correct results most of the time; maybe 1 out of ten readings returns the NaN. I am reading the diameter of 10 fibers, one at a time, with a Keyence LS9000 micrometer, and I wanted a running standard deviation and mean. My first thought was to save the data into a large 10x1000 array, then after each reading use a for/next loop to step through the array, including the current new reading, and recalculate the SD. When getting to 500 or more points this really slowed down the PLC scan. I played with the SD equation and came up with the equation above to calculate a running SD without saving each data point. Sooo much faster, and I only have to save two numbers for each fiber.
"Your formula has "to the power of half", which is the same as square root, correct?"
Yes, square root. Being a math major, that is the way I think, and I did not even look for the SQRT function. Today I will try the SQRT. If this does not work I will break the equation apart into smaller groups.
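The running-SD approach described in the thread, keeping only a count, a sum, and a sum of squares per fiber instead of the full 10x1000 array, can be sketched in Python roughly as follows (the class and sample values are illustrative, not the OP's actual ladder code or readings):

```python
import math

class RunningSD:
    """Running mean and SD that stores only three accumulators per fiber,
    instead of every individual data point."""

    def __init__(self):
        self.n = 0
        self.total = 0.0
        self.total_sq = 0.0

    def add(self, x):
        self.n += 1
        self.total += x
        self.total_sq += x * x

    def mean(self):
        return self.total / self.n

    def sd(self):
        # Mean of the squares minus square of the mean; clamp at zero because
        # floating-point cancellation can make the difference slightly negative,
        # which would otherwise hand a negative number to sqrt() (i.e. NaN).
        var = max(0.0, self.total_sq / self.n - (self.total / self.n) ** 2)
        return math.sqrt(var)

r = RunningSD()
for d in (125.0, 126.0, 124.0, 125.5):  # made-up fiber diameter readings
    r.add(d)
```

The `max(0.0, ...)` clamp is the key defensive step: without it, two nearly equal terms can subtract to a tiny negative value and the square root produces NaN.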

Look at the s^{2} algorithm below:
https://en.wikipedia.org/wiki/Algori...ating_variance
However, they correctly specify this caveat (this is the stuff you learn as a Computer Science major when studying the limitations of floating-point arithmetic):
Because SumSq and (Sum×Sum)/n can be very similar numbers, cancellation can lead to the precision of the result being much less than the inherent precision of the floating-point arithmetic used to perform the computation. Thus this algorithm should not be used in practice,[1][2] and several alternate, numerically stable algorithms have been proposed.[3] This is particularly bad if the standard deviation is small relative to the mean. However, the algorithm can be improved by adopting the method of the assumed mean.
This latter part is easily done by subtracting an approximate mean constant from ALL the sample values (do this as you run the iterative algorithm; there is no need to store the adjusted values). This is algebraically valid, since you can shift ALL the data points by, say, 100, which does NOT affect the variance/standard deviation. It WOULD affect the mean calculation, but all you have to do is add that constant back to the calculated mean at the end (e.g. add 100 back to the calculated mean value).
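A minimal sketch of that assumed-mean shift, assuming a simple one-pass sum/sum-of-squares accumulator (names and sample data are made up for illustration):

```python
import math

def sd_with_assumed_mean(samples, shift):
    """One-pass mean/SD that subtracts an approximate mean ('assumed mean')
    from every sample before accumulating. The shift does not change the
    variance; it is simply added back to the mean at the end."""
    n, s, sq = 0, 0.0, 0.0
    for x in samples:
        d = x - shift        # shift the data point; adjusted value not stored
        n += 1
        s += d
        sq += d * d
    # Clamp against tiny negatives caused by floating-point cancellation.
    var = max(0.0, sq / n - (s / n) ** 2)
    return s / n + shift, math.sqrt(var)   # add the shift back to the mean

mean, sd = sd_with_assumed_mean([1000.01, 1000.02, 1000.00, 1000.03], 1000.0)
```

Because the accumulated values are small deviations around zero rather than large raw readings, SumSq and (Sum×Sum)/n are no longer two huge, nearly equal numbers, so much less precision is lost to cancellation.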

Sorry, I don't think I put my thoughts clearly enough on why this is most likely happening. It's hard for me to put into words what I am trying to say here.
Using the square root or ^.5 will result in the same behavior, as they are both doing the same thing.
I believe the problem is arising from the way you are logging the data and doing your dividing. Basically, since the number is showing your average deviation, a reading that causes the deviation to "jump" can push the calculation into a negative number for deviation. This can be caused by a bad reading, a bad part, or a change in the machinery. For example, in my calculations here:
If the operator changes the machine speed by too large an amount one way, my deviation will suddenly go negative for a bit as the calculation shifts and catches up to how the machine is now running. Performing math operations like square root on these negative numbers will most likely result in the NaN.

MikeN, a negative argument to the square root was my first thought too. But the formula would never have a negative value under the square root; the ^2s make sure of this.
franji1, thanks for the link. Good reading. The part on "incremental computation" is exactly what I am doing: a running SD without having to step through all the data. I see where the data shift would work. I might implement it even though precision is not the problem.
I broke the equation into 3 parts and everything works fine. I'm guessing the first equation is just too much for the PLC.

I would highly recommend that you compare RunningTemp1 >= RunningTemp2 BEFORE you subtract them (and take the square root). The precision of BOTH terms is independent, so if the SD is already "relatively" small compared to the two sums, i.e. you are dealing with a LARGE Sum of Squares and a large SQUARE of the Sum, the delta COULD be negative.
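That compare-before-subtracting guard can be sketched as follows (term1/term2 stand in for RunningTemp1/RunningTemp2; the exact tag names and accumulator layout are assumptions, since the OP's PLC code isn't shown):

```python
import math

def running_sd(sum_sq, total, n):
    """Compare the two variance terms BEFORE subtracting them, per the
    advice above, so a negative delta never reaches the square root."""
    term1 = sum_sq / n           # RunningTemp1: mean of the squares
    term2 = (total / n) ** 2     # RunningTemp2: square of the mean
    if term1 >= term2:
        return math.sqrt(term1 - term2)
    return 0.0  # cancellation made the delta negative; report SD as zero
```

Returning 0.0 on a negative delta is one reasonable policy; alternatively the code could set an error bit, as suggested earlier in the thread.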