A new law went into effect in New York City on Wednesday that requires any business using A.I. in its hiring to submit that software to an audit to prove that it does not result in racist or sexist…

[-] discosage@kbin.social 5 points 1 year ago

Isn't this impossible without ensuring a completely "not racist" sample (which itself would be impossible)? Like the article pointed to a health app that did not explicitly identify race but still acted in a racist manner due to implicit bias in whatever the AI was trained on.

I mean, I guess the point is to ban AI in hiring, which isn't a bad idea.

AI is more than just machine learning. A hand-crafted AI can be shown to be free of illegal biases, for example. The border between traditional AI and nested if statements is pretty vague in that sense.

For automatically trained AI, things are harder to prove. For neural networks like ChatGPT, the technology to train AI is decades ahead of the technology to reason about the state of the trained AI.

This law doesn't ban all AI, but it does ban most of the implementations out there, and for good reason.

[-] conciselyverbose@kbin.social 4 points 1 year ago

A hand-crafted AI can be shown to be free of illegal biases, for example.

Not really. Hand-crafted systems have exactly the same issues with potential proxy identifiers that black boxes do.

[-] admiralteal@kbin.social 4 points 1 year ago* (last edited 1 year ago)

This really gets at the thing more people need to understand: none of these "AIs" are intelligent at all, and calling them artificial "intelligence" is completely misleading and causes people to make very dumb assumptions about how they work.

"Traditional" AI is just writing models to quantify things and then weighting decision-making based on those metrics. It's just playing Guess Who? with outcomes. "Oh, we can get rid of all the candidates we don't want by asking if they have a mustache and filtering out the ones who don't." Decisions like that, over and over. As you said, nested if statements.
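A minimal sketch of what such a hand-crafted screener looks like (every feature name and weight here is invented for illustration, not taken from any real system):

```python
# A hypothetical hand-crafted screener: just nested if statements
# with weights a human picked. All features and weights are made up.
def score_candidate(candidate: dict) -> float:
    score = 0.0
    if candidate["years_experience"] >= 5:
        score += 2.0
    if candidate["has_degree"]:
        score += 1.0
    # This rule looks neutral, but an unbroken career history can act
    # as a proxy for gender (e.g. parental-leave gaps), so even a
    # fully transparent rule set can encode discrimination.
    if candidate["employment_gap_years"] == 0:
        score += 1.5
    return score

candidates = [
    {"years_experience": 6, "has_degree": True, "employment_gap_years": 0},
    {"years_experience": 6, "has_degree": True, "employment_gap_years": 2},
]
scores = [score_candidate(c) for c in candidates]
# Two otherwise identical candidates get different scores
# purely because of the employment-gap rule.
```

The upside of this style is that an auditor can read every rule; the downside, as the reply above notes, is that "readable" does not mean "unbiased".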

Neural Networks are just doing the same thing, but without the authors actually making decisions about those intermediate questions. You feed dependent variables in along with lists of covariates and let an algorithm randomly stumble and guess around with them until it comes up with the relative quantification itself. This means you don't KNOW if it is including discrimination in its process.
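To make that proxy problem concrete, here is a toy simulation (all numbers, names, and probabilities are fabricated): a "model" that only ever sees zip code still reproduces a group disparity, because zip code correlates with group in the synthetic data.

```python
import random

random.seed(0)

# Synthetic data: the protected attribute is withheld from the model,
# but "zip_code" correlates with it. Everything here is invented.
def make_candidate():
    group = random.choice(["A", "B"])
    # Residential segregation makes zip code a 90% proxy for group.
    zip_code = 1 if (group == "A") == (random.random() < 0.9) else 0
    # The historical hiring labels were themselves biased toward A.
    if group == "A" and random.random() < 0.7:
        hired = 1
    else:
        hired = 1 if random.random() < 0.3 else 0
    return group, zip_code, hired

data = [make_candidate() for _ in range(10_000)]

# The simplest possible "trained model": hire rate conditioned on the
# one feature the model is allowed to see.
def hire_rate(keep):
    rows = [h for g, z, h in data if keep(z)]
    return sum(rows) / len(rows)

p_hired_zip1 = hire_rate(lambda z: z == 1)
p_hired_zip0 = hire_rate(lambda z: z == 0)
# The model never saw `group`, yet its decisions split along it,
# because zip code smuggles the group information in.
```

A real neural network does this with hundreds of subtler proxies at once, which is exactly why you can't clear it of bias just by deleting the protected column.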

Which is why laws like this are necessary. If you don't know how the "AI" model is arriving at its outcomes, you don't know if it is discriminating. You have to be able to audit it. In my "traditional" AI example, you can see it's likely discriminating against women, but with all the hidden layers and complex relationships in a NN you might not know it's doing the same until decades later.
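One thing an audit can check from the outside, without opening the black box at all, is disparate impact in the outcomes. A standard test in US employment law is the EEOC's "four-fifths rule": if any group's selection rate falls below 80% of the highest group's rate, that's evidence of adverse impact. A rough sketch with fabricated numbers:

```python
# Outcome-level audit: the four-fifths (80%) rule. The model itself
# never appears here; only its decisions do. Data is made up.
def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs."""
    totals, picked = {}, {}
    for group, selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        picked[group] = picked.get(group, 0) + int(selected)
    return {g: picked[g] / totals[g] for g in totals}

def adverse_impact(outcomes, threshold=0.8):
    """Flag each group whose selection rate is below `threshold`
    times the best group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

outcomes = ([("men", True)] * 50 + [("men", False)] * 50
            + [("women", True)] * 30 + [("women", False)] * 70)
flags = adverse_impact(outcomes)
# Selection rates: men 0.50, women 0.30 -> ratio 0.60 < 0.80,
# so the women group is flagged.
```

Tests like this are why outcome audits are feasible even for models nobody can interpret: you don't need to know *how* it decided, only *who* it selected.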

this post was submitted on 08 Jul 2023
33 points (100.0% liked)

Technology
