• chiliedogg@lemmy.world · 2 days ago

    What I’ll defend, however, is fractional measurements when precision matters.

    With decimal measurements, precision can’t be nearly as granular. If your measurement is precise to 1/8 of a unit, how do you represent that in decimal? 0.625 implies your measurement is precise to the nearest thousandth, but rounding it to one decimal place also isn’t precise. 5/8, however, tells you the measurement AND the precision.

    With fractional measurements, you can specify precision by changing the denominator to any number, whereas decimal is essentially a fractional system with the denominator fixed at powers of 10. For instance, for a measurement of half a unit with a level of precision between 0.1 and 0.01, fractional can be 6/12, 7/14, 8/16, 9/18, 10/20, 24/48, etc. Decimal can’t specify that precision without essentially writing a sentence.

    What’s simpler to record? “24/48” or “0.5 ± 0.0208333…”?
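A rough sketch of that idea in Python, under the assumption (mine, not something stated in the thread) that a reading in n-ths carries a built-in tolerance of half a step:

```python
from fractions import Fraction

def reading(num, den):
    """A measurement in den-ths: the value plus a half-step tolerance of 1/(2*den)."""
    return Fraction(num, den), Fraction(1, 2 * den)

# 6/12 and 24/48 denote the same value at different claimed precision.
value_coarse, tol_coarse = reading(6, 12)
value_fine, tol_fine = reading(24, 48)

assert value_coarse == value_fine == Fraction(1, 2)
assert tol_coarse == Fraction(1, 24)   # coarser claim
assert tol_fine == Fraction(1, 96)     # finer claim, same value
```

The half-step convention is one reading of the argument; the comment itself treats the whole step 1/48 as the tolerance.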

    • calcopiritus@lemmy.world · 24 hours ago

      That is not a flaw of decimals. It is a flaw of you not knowing how precision is encoded in decimals.

      0,7583 means 0,7583 ± 0,00005.

      0,758300 means 0,758300 ± 0,0000005.

      0,76 means 0,76 ± 0,005.

      That is why when in a store an item costs 7,5€, we don’t say 7,5€. We say 7,50€. Because it is precise to a hundredth of a €, not a tenth of a €.
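The rule described here (uncertainty is half a unit in the last shown decimal place) can be sketched like this; `implied_uncertainty` is a hypothetical helper, not standard library code:

```python
from fractions import Fraction

def implied_uncertainty(reading: str) -> Fraction:
    """Half of one unit in the last decimal place, e.g. '0,76' -> 1/200 (± 0,005)."""
    s = reading.replace(",", ".")  # accept the comma decimal separator used above
    places = len(s.split(".")[1]) if "." in s else 0
    return Fraction(1, 2 * 10 ** places)

assert implied_uncertainty("0,76") == Fraction(1, 200)      # ± 0,005
assert implied_uncertainty("7,50") == Fraction(1, 200)      # why the store writes 7,50€
assert implied_uncertainty("0,7583") == Fraction(1, 20000)  # ± 0,00005
```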

      • chiliedogg@lemmy.world · 16 hours ago

        I understand sig figs. That’s my entire point. What I’m saying is that fractions don’t require the use of sig figs, and especially don’t need any “+/-” bullshit at the end when precision is measured at a granularity that isn’t a perfect power of 10.

    • Programmer Belch@lemmy.dbzer0.com · 2 days ago

      When precision matters, that precision is considered in the measurements. You would never write 0.5 ± 0.0208333…; you express it as 0.500 ± 0.021. The error value is just the standard deviation of the measurements, and it doesn’t make sense to use more than 2 significant digits for it.

      Another example would be measuring large distances using a ruler with centimeter precision. In that case, a measurement would be expressed as 250 ± 1 cm. Converting the measurement from cm to mm, it is 2500 ± 10 mm. This is much more cumbersome with inches or feet as changing units means updating the precision, possibly reducing it.
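The metric conversion above is just multiplying both numbers by the same power of ten; a minimal sketch:

```python
def convert(value, error, factor):
    """Scale a measurement and its uncertainty by the same unit factor,
    so the stated precision carries over unchanged."""
    return value * factor, error * factor

assert convert(250, 1, 10) == (2500, 10)  # 250 ± 1 cm -> 2500 ± 10 mm
```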

      • chiliedogg@lemmy.world · 1 day ago

        Did I defend using imperial units?

        I’m defending recording precision without having to add a qualifying statement, because otherwise in decimal you can only increase precision by orders of magnitude.

    • Jesus_666@lemmy.world · 2 days ago

      That does make sense when you need absolute precision like when doing abstract math. Otherwise you can just use whichever unit and number of significant digits you need and be precise to that amount. That’s what you do with imperial/American customary units as well; a 5/32" screw isn’t going to be manufactured to the precision of a Planck length; manufacturers specify their sizes to three significant digits of an inch.

      Let’s say you have a machining project and your tools are precise to 0.1 mm. So you plan things out at a precision of 0.1 mm. It doesn’t matter that a distance is 17/38 cm exactly. It doesn’t matter that it’s 4.473684210526315789… mm. You can’t set the tool to anything better than 4.5 mm anyway.
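The arithmetic in that machining example checks out with exact fractions (a sketch; `Fraction` just makes the snap-to-tool-resolution step explicit):

```python
from fractions import Fraction

exact_mm = Fraction(17, 38) * 10  # 17/38 cm expressed in mm (= 85/19, 4.4736...)
tool_step = Fraction(1, 10)       # the tool's 0.1 mm resolution

# Snap the exact length to the nearest setting the tool can actually reach.
setting = round(exact_mm / tool_step) * tool_step

assert float(exact_mm) != float(setting)  # the exact value is unreachable
assert setting == Fraction(9, 2)          # 4.5 mm, as in the comment
```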

      Also note that the metric system doesn’t prevent you from using fractions. You’re perfectly free to work with fractions where useful. That’s just not how people talk about lengths because those fractions have no meaning outside your specific use case.

      • chiliedogg@lemmy.world · 2 days ago

        But that 5/32 screw has its precision built into the measurement. Sig figs and error ranges aren’t required for fractional, because both are built into the denominator.

        If your 5/32 measurement is super precise you can record it as 160/1024ths, because the denominator has “+/- 1/2048” built into the measurement.
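Those two claims can be checked directly; the half-step tolerance (half of one denominator step) is my reading of the convention being described:

```python
from fractions import Fraction

# Scaling 5/32 up to 160/1024 keeps the value identical but shrinks the
# built-in half-step tolerance from 1/64 to 1/2048.
coarse_value, coarse_tol = Fraction(5, 32), Fraction(1, 2 * 32)
fine_value, fine_tol = Fraction(160, 1024), Fraction(1, 2 * 1024)

assert coarse_value == fine_value      # same length either way
assert fine_tol == Fraction(1, 2048)   # the tolerance named in the comment
assert coarse_tol / fine_tol == 32     # a 32x tighter precision claim
```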

        • calcopiritus@lemmy.world · 23 hours ago

          As I said in another (larger) comment, you just don’t know how precision is encoded in decimals, which doesn’t mean that it isn’t. In fact, precision is encoded in decimals, just like with fractions.

          0,7 is 0,7 ± 0,05.

          0,7000 is 0,7000 ± 0,00005.

          • chiliedogg@lemmy.world · 16 hours ago

            I have a set of precision digital calipers that shows decimal or fractional units. Versus a worse set of calipers that’s not 10x worse, it shows exactly the same measurements in decimal units, but with fractional units it will show a difference, because that difference can be represented.

    • rbos@lemmy.ca · 2 days ago

      This hurts my brain. Why do we care about all the weird fractions? +/- 0.1 is just another way of saying 1/10. You can still do that if you want without having to do fraction math in random denominators.

      • ryathal@sh.itjust.works · 2 days ago

        The fraction allows you to communicate length and tolerance in a single number. A decimal implies precision to the last digit; a fractional measure can distinguish 1/8 precision from 1/16 precision. 1/8 of a cm is less precise than a mm, but if you wrote 1.125 cm, you are now implying sub-mm precision.

        This matters because the level needed in building generally doesn’t line up with 1/10 measurements. For example, if a brick wall had 1 cm height differences between bricks in a row, it would be extremely obvious and look terrible. A 1 mm height difference would be impossible to notice, but is also overkill. Ideal is about 5/8 cm, or 6.25 mm, of difference over 3 meters of wall. The fractional measure often ends up easier to work with in practice.

        • rbos@lemmy.ca · 2 days ago

          “The fraction allows you to communicate length and tolerance in a single number”

          I don’t see how that isn’t true of decimals, too. 0.1 indicates a precision of 1 digit, 0.12 indicates a precision of 2, 0.120 indicates a precision of three.

          • ryathal@sh.itjust.works · 1 day ago

            Exactly like my example above. 1/8 implies ± 1/16, while .125 implies ± .0005, even though it was only measured to ± .0625, which is about 2 orders of magnitude different.

            • rbos@lemmy.ca · 1 day ago

              In any context where it’s important, you’d note it with +/-. Not really a problem.

              I guess there’s nothing wrong with saying 1/8th metre, 1/8th centimetre, 15/16th metre either. Just as some people might use 0.356 inches.

              • chiliedogg@lemmy.world · 1 day ago

                I’d be a big fan of fractional metric.

                Although if we really wanted to go crazy (this will never happen), we’d ditch base-10. It’s a stupid base that we only use because of our fingers. Base 12 is superior and is actually the strongest defense of feet and inches (though yards can fuck right off). It has 6 divisors whereas 10 only has 4.

                Base 60 is also cool (divisible by 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, and 60), but that would also be significantly more difficult to teach children - it takes them long enough to learn the order of 26 letters.

                And being a geographer, I adore 360 because it’s fucking awesome to work with, and you don’t get a better composite until 2520, which is just too much to deal with.

                • rbos@lemmy.ca · 13 hours ago

                  Yeah, a duodecimal metric system would have been better. Still, it’s more important to have a standard system than an ideal one. That’s also the strongest argument for the US customary system within the USA, but it breaks down when you widen the scope to the world.

                  In the 18th-century context, with its dozens of competing measurement systems, something like the metric system was sorely needed just for standardization. We’re just lucky that it was something more or less sensible. Had the US customary system won out, I think we’d be objectively worse off.

                  So it could have been better, but it could also have been much, much worse.

    • rbos@lemmy.ca · 2 days ago

      0.625 implies your measurement is precise to the nearest thousandth

      It does. If it were precise to less than that, you’d say 0.62 or 0.6 to indicate hundredths or tenths. Why would you say 0.625 if you’re not precise to thousandths? You’d say 0.62500 if you wanted to indicate precision to hundred-thousandths.

          • chiliedogg@lemmy.world · 16 hours ago

            That’s my point. You essentially need to add a qualifying statement to make decimal work, and even then people don’t naturally understand the precision. In your example, most people read the precision as the step of the last digit shown, whereas the full uncertainty window is twice that, since the error extends to either side of the measurement.

    • bufalo1973@piefed.social · 2 days ago

      If you are drawing maps, a precision of meters is enough. If you are building a house, cm it is. If you are making furniture, mm. If you are working with metal, µm (micrometers).

    • zaphod@sopuli.xyz · 2 days ago

      If I want to build something and I want it to be 23/48" ± 1/24" how would I write that? Because the way I understand it x/48" would imply a tolerance of ± 1/48".

      • chiliedogg@lemmy.world · 2 days ago

        If your tolerance is 1/24, your precision isn’t fine enough to record 23/48.

        23/48 has a built in tolerance of +/- 1/96, because outside of that range the measurement would read as either 22/48 or 24/48.
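That “reads as” behaviour can be sketched as a snap-to-grid function (a toy model of the argument, not anyone’s spec):

```python
from fractions import Fraction

def snap(measurement, den):
    """Round a measurement to the nearest den-th; readings within half a
    step of each other collapse to the same fraction."""
    return Fraction(round(measurement * den), den)

# Anything within ± 1/96 of 23/48 reads as 23/48; beyond that it flips.
assert snap(23/48 + 0.009, 48) == Fraction(23, 48)
assert snap(23/48 + 0.011, 48) == Fraction(1, 2)   # i.e. 24/48
```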

    • BlackLaZoR@lemmy.world · 1 day ago

      With decimal measurements, precision can’t be nearly as granular. If your measurement is precise to one 1/8 of a unit,

      My metric measurements are precise to 1/10 of a unit, like 22.7°C or 34.7 cm.

      • chiliedogg@lemmy.world · 1 day ago

        What if you get a new ruler that’s 4 times as precise as the one you have that measures to 0.1 cm? You don’t want to record a reading as 0.70 cm, because that implies more precision than your measurement has. But you could record it in 40ths with fractions.

        Another way to look at it is that decimal is already a fractional system (1/10, 1/100, 1/1000) that doesn’t allow you to use 90% of possible fractions.

        • BlackLaZoR@lemmy.world · 1 day ago

          If there’s a technical need, you can have your scale divided into whatever you want. There’s nothing preventing you from dividing your scale every 0.25 mm to get quarter-millimeter precision. It’s very rarely done because there’s no need, but it’s absolutely possible.

          Thermometers sometimes have divisions every 0.5°C instead of 1°C.

          • chiliedogg@lemmy.world · 1 day ago

            Yes, but how do you record that precision without needing a qualifying statement? When precision matters, “0.25” represents a measurement that is known to be closer to 0.25 than it is to either 0.24 or 0.26. Something that is only precise to 1/4 of a unit isn’t that precise. The decimal way to record a precision of 1/4 is “0.25 +/- 0.125”.

            The thing to understand about decimals and precision is that you’re still recording a fractional measurement, but your denominator is fixed to powers of 10. 0.1 is 1/10. 0.01 is 1/100. So increasing precision by less than a factor of 10 is difficult to represent.

            This matters a lot for things like digital calipers, where a cheap set will show the same measurement as a nice set that’s more precise because the good ones aren’t 10 times as precise. But if they have a fractional setting, the nicer ones will read more precisely because that increased precision can be represented on the display.
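A toy model of the two caliper displays; the step sizes (0.01 units for the decimal readout, 1/128 for the fractional one) are my assumptions, chosen only to illustrate the point:

```python
from fractions import Fraction

def decimal_display(x, places=2):
    """A readout fixed to hundredths of a unit."""
    return round(x, places)

def fractional_display(x, den=128):
    """A readout in 128ths of a unit (step 1/128 = 0.0078125)."""
    return Fraction(round(x * den), den)

# Two true lengths closer together than the decimal step but more than
# one fractional step apart: only the fractional readout separates them.
a, b = 0.549, 0.552
assert decimal_display(a) == decimal_display(b) == 0.55
assert fractional_display(a) != fractional_display(b)
```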