submitted 1 year ago by Sivecano@lemmy.dbzer0.com to c/196
[-] barsoap@lemm.ee 16 points 1 year ago

Symbolically, sure, but then you're not dealing with infinities, you're just representing them.

It's a meme, it's playing fast and loose with things, but the general gist is that mathematics, to this day, doesn't really care about Gödel/Church/Turing, incompleteness, the halting problem, whatever angle you want to look at it from. The formalists lost the war, and mathematicians simply went on doing maths as if nothing had happened, as if a system could be simultaneously complete and consistent. There are people out there preaching to the unenlightened masses, but it's an uphill battle.

[-] kogasa@programming.dev 9 points 1 year ago

Math went on because it doesn't matter. Nobody cares about incompleteness. If you can prove ZFC is inconsistent, do it and we'll all move to a new system, and most of us wouldn't even notice (since nobody references the axioms outside of set theorists and logicians anyway). If you can exhibit a concrete statement it can't decide, do it and nobody will care, since the culprit will be an arcane theorem far outside the realm of non-logic fields of math.

[-] barsoap@lemm.ee 4 points 1 year ago* (last edited 1 year ago)

You wouldn't even notice if some proof is wrong because it relies on an inconsistency, that's the issue. And that's before the fact that you wouldn't notice because no one builds anything on axioms but instead uses fragile foundations made of intuition, hand-waving, and mass psychology.

Incomplete is fine; that's exactly what constructive maths is doing.

[-] kogasa@programming.dev 4 points 1 year ago

You wouldn’t even notice if some proof is wrong because it relies on an inconsistency, that’s the issue.

You wouldn't notice because there's no realistic chance that any meaningful result in the vast majority of math depends strictly on the particular way in which ZFC is hypothetically inconsistent.

And that’s before the fact that you wouldn’t notice because no one builds anything on axioms but instead uses fragile foundations made of intuition, hand-waving, and mass psychology.

This is a ridiculous attitude. Nobody uses the axioms of ZFC directly because that would be stupid. It's obviously sufficient to know how to do so. It makes literally no difference to the vast majority of all math which particular axiomatic formalism you decide to use, because all of those results are trivially translatable between them.

[-] barsoap@lemm.ee 1 points 1 year ago

because all of those results are trivially translatable between them.

Then go ahead, reformulate everything in CoC or HoTT or similar. I'm waiting. Prove it trivial, prove that what you did is actually consistent.

[-] kogasa@programming.dev 2 points 1 year ago

Hence, what you're saying is stupid. You might as well say "all computer science should be done in binary." What you're saying is completely unreasonable and has no bearing on actual mathematics.

[-] barsoap@lemm.ee 1 points 1 year ago

Are you saying that the proofs of the four colour theorem aren't proofs, aren't mathematics, or something to that effect?

And no, working in Coq (as Gonthier did) is quite different from punching holes into cardstock. CS loves abstraction: all those type-theory-based systems basically look like functional programming languages and are just as ergonomic. Because they are functional programming languages (with fancy-pants type systems).
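
For a taste, a minimal sketch in Lean (a close cousin of Coq), using nothing beyond its core library: definitions and proofs live in one and the same term language.

```lean
-- ordinary functional programming:
def double (n : Nat) : Nat := n + n

-- and a proof, written as a term in the very same language;
-- Nat.add_comm is the core library's commutativity lemma
theorem add_swap (m n : Nat) : m + n = n + m := Nat.add_comm m n
```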

And by denying those systems you're then also denying the value of category theory, because there are some rather strong isomorphisms between those systems. Are you sure you want to throw all that maths out of the window just to be salty on the internet?

[-] kogasa@programming.dev 2 points 1 year ago

Are you saying that the proofs of the four colour theorem aren’t proofs, aren’t mathematics, or something to that effect?

No, and I can't see how a reasonable person would think I'm saying anything like that.

And no, working in Coq (as Gonthier did) is quite different from punching holes into cardstock.

I didn't say anything to that effect whatsoever.

CS loves abstraction: all those type-theory-based systems basically look like functional programming languages and are just as ergonomic. Because they are functional programming languages (with fancy-pants type systems). And by denying those systems you’re then also denying the value of category theory, because there are some rather strong isomorphisms between those systems. Are you sure you want to throw all that maths out of the window just to be salty on the internet?

Who are you talking to? What is this fucking insanity?

[-] barsoap@lemm.ee 1 points 1 year ago* (last edited 1 year ago)

No, and I can’t see how a reasonable person would think I’m saying anything like that.

So now you do consider working in CoC to be a reasonable way to do maths. Thus, we have a contradiction. Thus, you make no sense. QED.

I didn’t say anything to that effect whatsoever.

You said "in binary". Peolpe haven't done that since the days of hand-punching things into cardstock. Which is, precisely, working in binary. Don't ask me whether a hole is a 0 or 1 I'm not from that era.

[-] kogasa@programming.dev 2 points 1 year ago

So now you do consider working in CoC to be a reasonable way to do maths. Thus, we have a contradiction. Thus, you make no sense. QED.

I never said otherwise, you literal lunatic. I said it would be completely unreasonable to "just rewrite the whole thing in ~~rust~~ CoC." The vast majority of all math has literally nothing to do with nitpicky foundations issues.

You said “in binary”. People haven’t done that since the days of hand-punching things into cardstock, which is, precisely, working in binary. Don’t ask me whether a hole is a 0 or a 1, I’m not from that era.

I can't have this conversation with someone who can't read.

[-] barsoap@lemm.ee 1 points 1 year ago

The vast majority of all math has literally nothing to do with nitpicky foundations issues.

The whole of maths is built on it. The validity of everything hinges on the foundations. I'd even say that the beauty of everything depends on it, but formalists are so caught up in their world that they don't even bother to look at constructive maths. The vast majority of maths has already been rewritten in constructive terms. It enabled proofs that were previously impossible.

Now, your area of maths might be so arcane and special and everything that nothing whatsoever from the constructive side could ever amount to anything, but, frankly speaking, I fucking doubt it, because you seem to be largely ignorant about the whole topic. For one, doing constructive maths doesn't mean a preoccupation with "nitpicky foundation issues". Those foundations were laid ages ago; it's been highfalutin from then on.

Have you watched that video I linked?

[-] uriel238 2 points 1 year ago

We have sorta the same problem with imaginary numbers, and I remember some programmable calculators can process complex numbers using symbolic representation (which happens to work similarly to Cartesian coordinates, so that's convenient).
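
Presumably something like this under the hood (a minimal sketch in Haskell; how any particular calculator actually stores them is my assumption, not documented fact):

```haskell
-- complex numbers stored as Cartesian pairs (re, im)
data Complex = Complex { re :: Double, im :: Double }

addC, mulC :: Complex -> Complex -> Complex
addC (Complex a b) (Complex c d) = Complex (a + c) (b + d)
mulC (Complex a b) (Complex c d) = Complex (a*c - b*d) (a*d + b*c)
```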

But from what I remember, infinities bigger than the counting numbers (say, the set of real numbers) cannot be differentiated from each other, so we don't have established rules.

To be fair, I last tinkered with infinities in the aughts, and even then as a hobbyist. The Grand Hilbert Hotel may have figured out how to accommodate more compound infinities and still retain perfect utilization since the last time I visited.

[-] barsoap@lemm.ee 4 points 1 year ago* (last edited 1 year ago)

https://en.wikipedia.org/wiki/Continuum_hypothesis

Hmm. Frankly speaking, I always assumed mathematicians had more of an idea about infinities; I mean, why even have indices if you don't have an inductive rule to descr... oh wait, never mind.

That said, the reals aren't countable, yet we have perfectly reasonable ways to deal with them symbolically, even compute with them, and represent ~~every single one of~~^1^ them in finite space; it's just that when you want to compare or output them with infinite precision, you might have to wait for eternity. But who needs infinite precision anyway, arbitrary precision is plenty.

^1^ On second thought, after diagonalisation, no we don't. Or we do, because there's some magic going on with included transcendental constants that breaks through that... do I look like a numerologist?
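
To make "compute with them" concrete, a toy sketch in Haskell (the type name and encoding are mine, not any standard library's): a real is a function that, queried at precision n, yields a rational within 2^-n of the true value.

```haskell
import Data.Ratio ((%))

-- a computable real: approximation within 2^-n at query n
type CReal = Integer -> Rational

-- addition queries both arguments one level deeper,
-- so two errors of at most 2^-(n+1) sum to at most 2^-n
addR :: CReal -> CReal -> CReal
addR x y n = x (n + 1) + y (n + 1)

-- Euler's number as partial sums of 1/k!;
-- the tail beyond k = n+2 is comfortably below 2^-n
eReal :: CReal
eReal n = sum [1 % product [1 .. k] | k <- [0 .. n + 2]]
```

Equality of two such reals is undecidable in general, which is exactly the "wait for eternity" part; any finite precision you ask for, though, comes back in finite time.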

[-] silent_water@hexbear.net 1 points 1 year ago

there actually is a way to represent the reals with full generality in homotopy type theory, via Cauchy sequences -- work is still ongoing to implement it in a real programming language and to prove that type checking is decidable, but the theory is already in place.

[-] dat_math@hexbear.net 4 points 1 year ago

(which happens to work similarly to Cartesian coordinates, so that's convenient)

it's more than convenient, it's an isomorphism!

[-] kogasa@programming.dev 3 points 1 year ago

I don't understand what you think the problem is. What do you mean infinities can't be differentiated from each other? Infinite cardinals are by definition equivalence classes of sets that can be placed in bijection with one another. You can compare one cardinal to another by showing there exists an injection from a representative set of the first one into a representative for the other. You can show equality by showing there is an injection both ways (Cantor-Schröder-Bernstein theorem) or otherwise producing a bijection explicitly. Infinite ordinals may as well be copies of the natural numbers indexed by infinite cardinals, so of course we can distinguish them too.
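
To make "producing a bijection explicitly" concrete, here's the usual zig-zag between the naturals and the integers (a quick Haskell sketch, with naturals modelled as non-negative Integers):

```haskell
-- 0,1,2,3,4,... <-> 0,-1,1,-2,2,...
natToInt :: Integer -> Integer
natToInt n = if even n then n `div` 2 else negate ((n + 1) `div` 2)

intToNat :: Integer -> Integer
intToNat z = if z >= 0 then 2 * z else negate (2 * z) - 1
```

The two maps invert each other, and that's all "same cardinality" means.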

[-] uriel238 2 points 1 year ago

So far, AFAIK, we have two kinds of infinity: those that can be accommodated at the Grand Hilbert (e.g. integers, fractions, etc.) and those that cannot (the set of irrational numbers, the set of curves, the set of polytopes, etc.). This was why we had to differentiate orders of infinity, e.g. ℵ₀ (the Grand Hilbert set), ℵ₁ (the irrational set, the real set), ℵ₂ (???), ℵ₃ (?????), ℵₙ (??!??????????)

For values of infinity that are in higher orders than ℵ₀, we can only tell if they're equal to ℵ₁ or undetermined, which means their infinity size is ℵ₁ or greater, but still unknown.

Unless someone did some Nobel-prize-worthy work in mathematics that I haven't heard about, which is quite possible.

[-] kogasa@programming.dev 3 points 1 year ago

No, that's definitely not true. As I said, infinite cardinals (like the cardinality of the naturals, ℵ₀) are defined to be equivalence classes of sets that can be placed in bijection with one another. Whenever you have infinite sets that can't be placed in bijection, they represent different cardinals. The set of functions f : X -> X has cardinality 2^(|X|) too, so e.g. there are more real-valued functions of real numbers than there are real numbers. You can use this technique to get an infinite sequence of distinct cardinals (via Cantor's theorem, which has a simple constructive proof). And once you have all of those, you can take their (infinite) union to get yet another, greater cardinal, and continue that way. There are in fact more cardinalities that can be obtained in this way than we could fit into a set -- the (infinite) number of infinite cardinals is too big to be an infinite cardinal.
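
To spell out the function-space claim, it's standard cardinal arithmetic (for infinite X, where |X|·|X| = |X|):

2^(|X|) ≤ |X|^(|X|) ≤ (2^(|X|))^(|X|) = 2^(|X|·|X|) = 2^(|X|)

so the function space is squeezed to exactly 2^(|X|), and iterating Cantor's theorem is what gives the strictly increasing tower ℵ₀ < 2^(ℵ₀) < 2^(2^(ℵ₀)) < ...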

You might be thinking of the generalized continuum hypothesis that says that there are no more cardinal numbers in between the cardinalities of power sets, i.e. that ℵ₁ = 2^(ℵ₀), ℵ₂ = 2^(ℵ₁), and so on.

[-] uriel238 1 points 1 year ago

It's quite possible that what I'm encountering is a momentary failure to understand Cantor's theorem, or rather the mechanism it uses to enumerate the innumerable. So my math may just be old.

[-] kogasa@programming.dev 1 points 1 year ago

Cantor's theorem says the power set of X has a strictly larger cardinality than X.

When |X| is a natural number, the power set of X has cardinality 2^(|X|), since you can think of an element of the power set as a choice, for each element of X, of "in the subset" vs "not in the subset." Hence the notation 2^X for the power set of X.

Cantor's theorem applies to all sets, not just finite ones. You can show this with a simple argument. Let X be a set and suppose there is a bijection f : X -> 2^(X). Let D be the set { x in X : x is not in f(x) }. (The fact that this is well defined is given by the comprehension axiom of ZFC, so we aren't running into a Russell's paradox issue.) Since f is a bijection, there is an element y of X so that f(y) = D. Now either:

  • y is in D. But then by definition y is not in f(y) = D, a contradiction.

  • y is not in D. But then by definition, since y is not in f(y), y is in D.

Thus, there cannot exist such a bijection f, and |2^(X)| != |X|. It's easy enough to show that the inequality only goes one way (x ↦ {x} is an injection from X into 2^(X)), i.e. |2^(X)| > |X|.
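
And since Coq and type theory came up earlier in the thread: the same diagonal argument goes through almost verbatim in a proof assistant. A sketch in Lean 4 (using Mathlib's obtain tactic), modelling subsets of X as predicates X → Prop rather than ZFC sets:

```lean
-- no f : X → (X → Prop) hits every predicate: Cantor, diagonalised
theorem cantor {X : Type} (f : X → (X → Prop)) :
    ¬ ∀ g : X → Prop, ∃ y : X, f y = g := by
  intro hsurj
  -- D is the diagonal predicate fun x => x ∉ f x from the proof above
  obtain ⟨y, hy⟩ := hsurj fun x => ¬ f x x
  -- evaluating f y = D at y itself yields f y y ↔ ¬ f y y
  have h : f y y ↔ ¬ f y y := iff_of_eq (congrFun hy y)
  have hn : ¬ f y y := fun hyy => h.mp hyy hyy
  exact hn (h.mpr hn)
```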

[-] silent_water@hexbear.net 1 points 1 year ago

oh this is a neat argument I'd never encountered before. I was also under the impression that we hadn't proved there were infinities with cardinality greater than ℵ₁.

[-] kogasa@programming.dev 1 points 1 year ago

How/why would you simultaneously hold this belief and reference the HoTT book?

[-] silent_water@hexbear.net 1 points 1 year ago
[-] kogasa@programming.dev 1 points 1 year ago

What are you doing with the HoTT book if you have never heard of Cantor's theorem??? You must understand there's a minimum of several years of intensive study in between these two things.

[-] silent_water@hexbear.net 1 points 1 year ago

self-study. it's been a decade since I was in school and I kept encountering references to it, so I've been working through a lecture series and the book.

