But what if your precision is greater than 1/100 but not 10 times as precise?
If you have a measurement of 0,7 that is more precise than 0,7 implies (±0,05) but less precise than 0,70 implies (±0,005), you can just say 0,7 ± 0,02.
That’s my point. You essentially need to add a qualifying statement to make decimal notation work, and even then people don’t naturally understand the precision. In your example, most people read the precision as the last figure (.02), whereas the full uncertainty window is actually .04, since the ±.02 applies on either side of the measurement.
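A minimal sketch in Python of the arithmetic behind that point (the `interval` helper is mine, purely illustrative, not from the thread): a value quoted as 0.7 ± 0.02 covers [0.68, 0.72], so the total window is 0.04 wide even though readers tend to fixate on the 0.02.

```python
# Illustrative sketch: the total uncertainty window of "value ± half_error"
# is twice the quoted half-error. Helper name is hypothetical.

def interval(value: float, half_error: float) -> tuple[float, float]:
    """Return the (low, high) interval implied by value ± half_error."""
    return (value - half_error, value + half_error)

low, high = interval(0.7, 0.02)              # 0.7 ± 0.02
print(f"interval: [{low:.2f}, {high:.2f}]")  # interval: [0.68, 0.72]
print(f"total window: {high - low:.2f}")     # total window: 0.04, not 0.02

# For comparison, the precision implied by significant figures alone:
#   "0.7"  implies ±0.05  -> a window 0.10 wide
#   "0.70" implies ±0.005 -> a window 0.01 wide
# ±0.02 falls between the two, which is why it needs the explicit qualifier.
```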