Having to provide a "+/-" for a measurement is a silly alternative to using a measurement that already includes precision. You're just so used to doing things a stupid way that you don't see it.
providing an arbitrarily non-reduced fraction is an even sillier alternative. the same fundamental issue arises either way, and it’s much clearer to use obvious semantics that everyone can understand
It's not the same issue at all.
How do you represent 1/64th in decimal without implying greater or lesser precision? Or 1/3rd? Or 1/2, or literally anything whose denominator isn't a power of 10?
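A quick sketch of that mismatch, assuming the usual significant-figure convention that a printed decimal is trusted to within half its last digit (the specific numbers and code here are illustrative, not from anyone in the thread):

```python
from fractions import Fraction

# 1/64 read off a 1/64-graduated scale: the real uncertainty is half a
# graduation, i.e. +/- 1/128.
actual = Fraction(1, 128)

# Each decimal rendering implies a different +/- (half the place value of its
# last digit), and none of them equals 1/128.
for text, implied in [("0.015625", Fraction(1, 2_000_000)),
                      ("0.016",    Fraction(1, 2_000)),
                      ("0.02",     Fraction(1, 200)),
                      ("0.0",      Fraction(1, 20))]:
    verdict = ("claims tighter precision than the measurement has"
               if implied < actual
               else "claims looser precision than the measurement has")
    print(f"{text:>9} implies +/- {float(implied):g} -> {verdict}")
```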
You're defending the practice of saying "this number, but maybe not, because we can't actually measure that precisely, so here's some more numbers you can use to figure out how precise our measurements are"
How is that a more elegant solution than simply having the precision recorded in a single rational measurement?
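A minimal sketch of that "single rational measurement" idea, assuming the denominator of the non-reduced fraction records the scale's graduation and the uncertainty is half a graduation (the reading() helper below is hypothetical, not something either side spelled out):

```python
from fractions import Fraction

def reading(numerator: int, denominator: int):
    """A scale reading kept as numerator/denominator rather than reduced."""
    value = Fraction(numerator, denominator)        # 38/64 reduces to 19/32 as a value...
    half_graduation = Fraction(1, 2 * denominator)  # ...but the written denominator still
    return value, half_graduation                   # carries the resolution: +/- 1/128

value, tol = reading(38, 64)
print(value, "+/-", tol)                # 19/32 +/- 1/128
print(float(value), "+/-", float(tol))  # 0.59375 +/- 0.0078125
```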