In v3.3, the default printing of floating-point numbers was changed from using the format string '%.16E' to using the format string '%.16g'. This limits the printed representation of many floats to the digits of interest; however, it can result in lost precision.
Doubles can have up to 17 significant decimal digits. The E format prints one digit before the decimal point and "precision" digits after it, for 17 significant digits in total, while the g format prints only "precision" significant digits in total, so only 16.
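The E-versus-g distinction comes from C's printf formatting, which other languages inherit; as a sketch (Python shown here rather than GemStone Smalltalk, since Python floats are C doubles and Python's % operator uses the same format strings):

```python
x = 7.6  # stored as the nearest representable double

# '%.16E': 1 digit before the point + 16 after = 17 significant digits
print('%.16E' % x)   # 7.5999999999999996E+00

# '%.16g': 16 significant digits in total (trailing zeros stripped)
print('%.16g' % x)   # 7.6
```

The E form exposes all 17 significant digits of the stored double, while the g form shows only the 16 digits of interest.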
To illustrate, in versions prior to 3.3,
7.6000000000000001 asString -> 7.6000000000000005E00
and in v3.3 and later:
7.6000000000000001 asString -> 7.600000000000001
With only 16 digits, the printed string may no longer have enough digits to resolve the original float.
'7.6000000000000001' asNumber
returns OOP 9360281465526838902
'7.6000000000000005E00' asNumber
returns OOP 9360281465526838902
'7.600000000000001' asNumber
returns OOP 9360281465526838918
Using #asStringLegacy produces the same printing as in earlier versions.
Last updated: 11/7/17