precision

A predefined variable that sets the number of significant digits used when reading floating-point values.

Syntax

precision

Comments

This variable sets the precision used when converting ASCII digits from a data file into their binary representation in memory. If precision is set to a value less than ten, a custom algorithm is used for the conversion, taking roughly half the time of the C library function atof. The custom algorithm handles only numbers with nine or fewer significant digits.
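As an illustration of why such a shortcut can be faster, here is a minimal C sketch of one common approach; this is an assumption about the technique, not the program's actual implementation. Because nine decimal digits always fit in a 32-bit integer, the mantissa can be accumulated with cheap integer arithmetic and scaled by a power of ten once at the end, avoiding most of the per-digit floating-point work a general-purpose atof performs:

#include <stdint.h>
#include <ctype.h>

/* Hypothetical sketch: parse a plain decimal string with at most
 * nine significant digits.  No exponent notation is accepted, and
 * at most nine fractional digits are assumed. */
static double fast_atof9(const char *s)
{
    int32_t mantissa = 0;   /* nine decimal digits fit in 32 bits */
    int frac = 0;           /* fractional digits seen so far      */
    int sign = 1;

    if (*s == '-')      { sign = -1; s++; }
    else if (*s == '+') s++;

    while (isdigit((unsigned char)*s))            /* integer part */
        mantissa = mantissa * 10 + (*s++ - '0');

    if (*s == '.') {
        s++;
        while (isdigit((unsigned char)*s)) {      /* fractional part */
            mantissa = mantissa * 10 + (*s++ - '0');
            frac++;
        }
    }

    /* precomputed negative powers of ten; valid while frac <= 9 */
    static const double tenth[] = { 1e0, 1e-1, 1e-2, 1e-3, 1e-4,
                                    1e-5, 1e-6, 1e-7, 1e-8, 1e-9 };
    return sign * (double)mantissa * tenth[frac];
}

A parser along these lines trades generality for speed, which matches the trade-off described above: past the supported digit count it must fall back to a full conversion routine such as atof.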

Example

set precision = 12  // be slow, but accurate
set precision = 6   // be fast, but risk losing digits