[-] PixxlMan@lemmy.world 4 points 1 year ago

To everyone commenting that you have to convert to binary to represent numbers because computers can't deal with decimal number representations: this isn't true! Floating point arithmetic could totally have been implemented with decimal digits instead of binary. Computers have no problem with decimal numbers - integers exist. Binary-based floating point numbers are perhaps a bit simpler to implement in hardware, but they're not a necessity. It just happens that the dominant floating point standards use binary.
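(To illustrate the point: Python's `decimal` module is a real base-ten floating point implementation, and it avoids the classic binary rounding surprise. A minimal sketch:)

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 exactly, so the
# familiar rounding surprise appears:
print(0.1 + 0.2 == 0.3)  # False

# Decimal floating point represents 0.1 exactly, so the sum
# behaves the way the written base-ten arithmetic suggests:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```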

[-] JackbyDev@programming.dev 1 point 1 year ago

What you're talking about isn't floating point, it's fixed point.

[-] PixxlMan@lemmy.world 1 point 1 year ago

Wrong. Sounds like you think only fixed point/precision could be implemented in decimal. There's nothing about floating point that would make it impossible to implement in decimal. In fact, it's a common form of floating point. See C# "decimal" type docs.

The beginning of the Wikipedia article on floating point also says this: "In practice, most floating-point systems use base two, though base ten (decimal floating point) is also common." (https://en.m.wikipedia.org/wiki/Floating-point_arithmetic) Also check this out: https://en.m.wikipedia.org/wiki/Decimal_floating_point

Everything in my comment applies to floating point. Not fixed point.

[-] JackbyDev@programming.dev 1 point 1 year ago

I generally interpret "decimal" to mean "real numbers" in the context of computer science rather than "base 10 numbers". But yes, of course you can implement floating point in base 10 - that's essentially what scientific notation is!

this post was submitted on 17 Sep 2023
438 points (100.0% liked)

Programmer Humor
