84
submitted 6 days ago by chobeat@lemmy.ml to c/technology@lemmy.ml
top 17 comments
[-] tfowinder@beehaw.org 12 points 6 days ago

LLMs just mirror the real-world data they are trained on.

Other than censorship, I don't think there is a way to make it stop. It doesn't understand moral good or bad; it just spits out what it was trained on.

[-] Alsjemenou@lemy.nl 11 points 6 days ago

Women who ask ChatGPT for financial advice should make less money.

[-] themurphy@lemmy.ml 6 points 6 days ago

You mean *people. Even though I might not agree: ChatGPT is better at financial advice than a lot of people. Just don't ask it how to become rich, because you can't.

[-] hoshikarakitaridia@lemmy.world 9 points 6 days ago

Now I really want to know if that's actually the best advice or just sexism. Because I could see that our society might be so bad that this is genuinely good advice.

[-] JumpyWombat@lemmy.ml 35 points 6 days ago

It’s neither. LLMs are statistical models: if the training material contains bias (women get lower salaries) the output will reflect that bias.

[-] Tenderizer78@lemmy.ml 7 points 6 days ago

That's not the question.

It wasn't about whether the LLM's reasoning was sound; it was about whether its conclusion was (pragmatically speaking) correct.

[-] JumpyWombat@lemmy.ml 5 points 5 days ago

LLMs do not give the correct answer, just the most probable sequence of words based on the training.

That kind of study (and there are hundreds of them) highlights two things:

1. LLMs can be incorrect, biased, or give fake information (the so-called hallucinations).
2. The previous point stems from the training material and proves the existence of bias in society.

In other words, an LLM recommending lower salaries for women is proof that there is a gender gap.
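To make that concrete, here is a minimal, hypothetical Python sketch (not the study's method and not how ChatGPT is actually built): a toy model that only learns which continuation is most frequent in its training text will reproduce whatever pay gap that text contains.

```python
from collections import Counter, defaultdict

# Hypothetical training snippets with a built-in pay gap.
corpus = [
    "he should ask for 90k",
    "he should ask for 90k",
    "he should ask for 80k",
    "she should ask for 70k",
    "she should ask for 70k",
    "she should ask for 60k",
]

# Count how often each continuation follows each subject.
continuations = defaultdict(Counter)
for sentence in corpus:
    subject, rest = sentence.split(maxsplit=1)
    continuations[subject][rest] += 1

def advise(subject):
    # "Most probable sequence of words": return the continuation
    # seen most often after this subject in the training data.
    return f"{subject} {continuations[subject].most_common(1)[0][0]}"

print(advise("he"))   # -> he should ask for 90k
print(advise("she"))  # -> she should ask for 70k
```

The toy model isn't reasoning about fairness; it just reports the statistics of its corpus, which is exactly the point about the gender gap above.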

[-] Tenderizer78@lemmy.ml 5 points 5 days ago

Again, that wasn't the original question.

The question was whether women genuinely are more likely to be passed over for a job offer if they ask for as much pay as a man would, or whether the model is just reflecting training bias (as you described), or both. A broken clock is right twice a day, and it misses the point of the question to explain why you can't rely on said broken clock.

Are hiring managers actually less likely to hire women if they ask for market-rate pay, as opposed to men when they do the same?

[-] JumpyWombat@lemmy.ml 1 points 5 days ago* (last edited 5 days ago)

Are hiring managers actually less likely to hire women if they ask for market-rate pay, as opposed to men when they do the same?

If, instead of giving passive-aggressive replies, you spent a moment reflecting on what I wrote, you would understand that ChatGPT reflects reality, including any bias. In short, the answer is yes, with high probability.

[-] Melvin_Ferd@lemmy.world 2 points 6 days ago

I can't believe we would ever say this. No, the chat machine is the problem.

[-] JumpyWombat@lemmy.ml 8 points 6 days ago

The problem is using LLMs for the wrong things and expecting correct answers.

[-] Melvin_Ferd@lemmy.world 1 points 6 days ago

Absolutely. So who is building a study that uses it for the wrong thing and then publishing articles about it?

[-] JumpyWombat@lemmy.ml 9 points 6 days ago

The study wanted to highlight the bias, not to recommend ChatGPT's advice.

[-] SheeEttin@lemmy.zip 4 points 6 days ago

The chat machine is just a tool. The problem is that a pay disparity exists in society, and that they did not account for this bias in the training data.

[-] Melvin_Ferd@lemmy.world 3 points 6 days ago* (last edited 6 days ago)

That disparity gets very narrow when you account for men and women with similar roles, education, time in career, and industry.

Much of the remaining pay disparity comes from the jobs we pick. There is also social pressure in that a man's value is judged by how much he makes, so guys are more willing to take dangerous or demanding jobs that pay more.

The AI is still just reflecting that bias.

[-] match@pawb.social 3 points 6 days ago

Society is also the problem.

[-] Core_of_Arden@lemmy.ml 2 points 5 days ago

And the study should also mention that LLMs don't do anything by themselves; they do what they are trained to do... nothing more. They are just machines.
