Critical thinking (slrpnk.net)
[-] conditional_soup@lemm.ee 114 points 2 months ago

Idk, I think we're back to "it depends on how you use it". Once upon a time, the same was said of the internet in general, because people could just go online and copy and paste shit and share answers and stuff, but the internet can also just be a really great educational resource. I think that using LLMs in non-load-bearing, "trust but verify" type roles (study buddies, brainstorming, very high-level information searching) is actually really useful. One of my favorite uses of ChatGPT is when I have a concept so loose that I don't even know the right question to Google; I can just kind of chat with the LLM and refine it into a narrower, more google-able subject.

[-] takeda@lemm.ee 138 points 2 months ago

trust but verify

The thing is that an LLM is a professional bullshitter. It is actually trained to produce text that can fool an ordinary person into thinking it was produced by a human. The facts come second.

[-] conditional_soup@lemm.ee 54 points 2 months ago

Yeah, I know. I use it for work in tech. If I encounter a novel (to me) problem and don't even know where to start attacking it, the LLM can sometimes save me hours of googling: I just describe the problem in a chat format, explain what I want to do, and ask if there's a commonly accepted approach or library for handling it. Sure, it sometimes hallucinates a library, but that's why I go and verify and read the docs myself instead of just blindly copying and pasting.

[-] lefaucet@slrpnk.net 35 points 2 months ago* (last edited 2 months ago)

That last step of verifying is often being skipped and is getting HARDER to do

The hallucinations spread like wildfire on the internet. It doesn't matter what's true, just what gets clicks, and that encourages more apparent "citations". An even worse fertilizer of false citations is power-hungry bastards pushing false narratives.

AI rabbit holes are getting too deep to verify. It really is important to keep digital hallucinations out of the academic loop, especially for things with life-and-death consequences like medical school

[-] Impleader@lemmy.world 25 points 2 months ago

I don’t trust LLMs for anything based on facts or complex reasoning. I’m a lawyer and any time I try asking an LLM a legal question, I get an answer ranging from “technically wrong/incomplete, but I can see how you got there” to “absolute fabrication.”

I actually think the best current use for LLMs is for itinerary planning and organizing thoughts. They’re pretty good at creating coherent, logical schedules based on sets of simple criteria as well as making communications more succinct (although still not perfect).

[-] takeda@lemm.ee 6 points 2 months ago

Sadly, the best use case for an LLM is to pretend to be a human on social media and influence people's opinions.

Musk accidentally showed that's what they are actually using AI for, by having Grok inject disinformation about South Africa.

[-] TowardsTheFuture@lemmy.zip 21 points 2 months ago

And just as back then, the problem is not with people using something to actually learn and deepen their understanding. It is with people blatantly cheating and knowing nothing because they don’t even read the thing they’re copying down.

[-] TheTechnician27@lemmy.world 20 points 2 months ago* (last edited 2 months ago)

Something I think you neglect in this comment is that yes, you're using LLMs in a responsible way. However, this doesn't translate well to school. The objective of homework isn't just to reproduce the correct answer. It isn't even to reproduce the steps to the correct answer. It's for you to learn the steps to the correct answer (and possibly the correct answer itself), and the reproduction of those steps is a "proof" to your teacher/professor that you put in the effort to do so. This way you have the foundation to learn other things as they come up in life.

For instance, if I'm in a class learning to read latitude and longitude, the teacher can give me an assignment to find 64° 8′ 55.03″ N, 21° 56′ 8.99″ W on the map and write where it is. If I want, I can just copy-paste that into OpenStreetMap right now and see what horrors await, but to actually learn, I need to manually track down where that is on the map. Because I learned to use latitude and longitude as a kid, I can verify what the computer is telling me, and I can imagine in my head roughly where that coordinate is without a map in front of me.

Learning without cheating lets you develop a good understanding of what you: 1) need to memorize, 2) don't need to memorize because you can reproduce it from other things you know, and 3) should just rely on an outside reference work for whenever you need it.

There's nuance to this, of course. Say, for example, that you cheat to find an answer because you just don't understand the problem, but afterward, you set aside the time to figure out how that answer came about so you can reproduce it yourself. That's still, in my opinion, a robust way to learn. But that kind of learning also requires very strict discipline.

[-] adeoxymus@lemmy.world 8 points 2 months ago

To add to this, how you evaluate the students matters as well. If the evaluation can be too easily bypassed by making ChatGPT do it, I would suggest changing the evaluation method.

Imo a good method, although demanding for the tutor, is oral examination (maybe in combination with a written part). It allows you to verify that the student knows the stuff and understood the material. This worked well in my studies (a science degree), not so sure if it works for all degrees?

[-] UnderpantsWeevil@lemmy.world 5 points 2 months ago

I might add that a lot of the college experience (particularly pre-med and early med school) is less about education than a kind of academic hazing. Students are saddled with enormous amounts of debt and crushing volumes of work, and put into pools where only X% of the class can move forward on any terms (because the higher-tier classes don't have the academic staff and resources to train a full freshman class of aspiring doctors).

When you put a large group of people in a high stakes, high work, high competition environment, some number of people are going to be inclined to cut corners. Weeding out people who "cheat" seems premature if you haven't addressed the large incentives to cheat, first.

[-] Jankatarch@lemmy.world 41 points 2 months ago

This is the only topic I am close-minded and strict about.

If you need to cheat as a highschooler or younger there is something else going wrong, focus on that.

And if you are an undergrad or higher you should be better than AI already. Unless you cheated on important stuff before.

[-] sneekee_snek_17@lemmy.world 28 points 2 months ago

This is my stance exactly. ChatGPT CANNOT say what I want to say, how I want to say it, in a logical and factually accurate way, without me having to just rewrite the whole thing myself.

There isn't enough research about mercury bioaccumulation in the Great Smoky Mountains National Park for it to actually say anything of substance.

I know being a non-traditional student massively affects my perspective, but like, if you don't want to learn about the precise thing your major is about...... WHY ARE YOU HERE

[-] McDropout@lemmy.world 35 points 2 months ago

It’s funny how everyone is against students using AI to get summaries of texts, PDFs, etc., which I totally get.

But during my time through med school, I never got my exam paper back (ever!). The exam was a test where I needed to prove that I have enough knowledge, but an exam should also be allowed to show me where my weaknesses are so I can work on them. But no, we never get our papers back. And this extends beyond med school: exams like the USMLE are long and tiring, and at the end of the day we just want a pass, another hurdle to jump over.

We criticize students a lot (rightfully so), but we don’t criticize the system where students only study because there is an exam, not because they are particularly interested in the topic at hand.

A lot of topics that I found interesting in medicine got dropped because I had to sit for other examinations.

[-] disguy_ovahea@lemmy.world 29 points 2 months ago

Even more concerning, their dependence on AI will carry over into their professional lives, effectively training our software replacements.

[-] kibiz0r@midwest.social 6 points 2 months ago

While eroding the body of actual practitioners that are necessary to train the thing properly in the first place.

It’s not simply that the bots will take your job. If that were all, I wouldn’t really see it as a problem with AI so much as a problem with using employment to allocate life-sustaining resources.

But if we’re willingly training ourselves to remix old solutions to old problems instead of learning the reasoning behind those solutions, we’ll have a hard time making big, non-incremental changes to form new solutions for new problems.

It’s a really bad strategy for a generation that absolutely must solve climate change or perish.

[-] Numuruzero@lemmy.dbzer0.com 24 points 2 months ago

The issue as I see it is that college is a barometer for success in life, which for the sake of brevity I'll just say means economic success. It's not just a place of learning, it's the barrier to entry - and any metric that becomes a goal is prone to corruption.

A student won't necessarily think of using AI as cheating themselves out of an education because we don't teach the value of education except as a tool for economic success.

If the tool is education, the barrier to success is college, and the actual goal is to be economically successful, why wouldn't a student start using a tool that breaks open that barrier with as little effort as possible?

[-] Zink@programming.dev 7 points 2 months ago

especially in a world that seems to be repeatedly demonstrating to us that cheating and scumbaggery are the path to the highest echelons of success.

...where “success” means money and power - the stuff that these high-profile scumbags care about, and the stuff that many otherwise decent people are taught should be the priority in their life.

[-] PillowTalk420@lemmy.world 24 points 2 months ago* (last edited 2 months ago)

Even setting aside all of those things, the whole point of school is that you learn how to do shit, not pass it off to someone or something else to do for you.

If you are just gonna use AI to do your job, why should I hire you instead of using AI myself?

[-] digdilem@lemmy.ml 17 points 2 months ago

I went to school in the 1980s. That was the time that calculators were first used in class and there was a similar outcry about how children shouldn't be allowed to use them, that they should use mental arithmetic or even abacuses.

Sounds pretty ridiculous now, and I think this current problem will sound just as silly in 10 or 20 years.

[-] PillowTalk420@lemmy.world 10 points 2 months ago* (last edited 2 months ago)

lol I remember my teachers always saying "you won't always have a calculator on you" in the '90s, and even then I had one of those calculator wrist watches from Casio.

And I still suck at math without one so they kinda had a point, they just didn't make it very well.

[-] NateNate60@lemmy.world 6 points 2 months ago

It was a bad argument but the sentiment behind it was correct and is the same as the reasoning why students shouldn't be allowed to just ask AI for everything. The calculator can tell you the results of sums and products but if you need to pull out a calculator because you never learned how to solve problems like calculating the total cost of four loaves of bread that cost $2.99 each, that puts you at rather a disadvantage compared to someone who actually paid attention in class. For mental arithmetic in particular, after some time, you get used to doing it and you become faster than the calculator. I can calculate the answer to the bread problem in my head before anyone can even bring up the calculator app on their phone, and I reckon most of you who are reading this can as well.
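
To make that concrete, the usual mental shortcut runs: 4 × $2.99 = (4 × $3.00) − (4 × $0.01) = $12.00 − $0.04 = $11.96. Round to a convenient number, multiply, then subtract the correction.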

I can't predict the future, but while AIs are not bad at telling you the answer, at this point in time, they are still very bad at applying the information at hand to make decisions based on complex and human variables. At least for now, AIs only know what they're told and cannot actually reason very well. Let me provide an example:

I provided the following prompt to Microsoft Copilot (I am slacking off at work and all other AIs are banned so this is what I have access to):

Suppose myself and a friend, who is a blackjack dealer, are playing a simple guessing game using the cards from the shoe. The game works thusly: my friend deals me two cards face up, and then I have to bet on what the next card will be.

The game begins and my friend deals the first card, which is the ace of spades. He deals the second card, which is the ace of clubs. My friend offers a bet that pays 100 to 1 if I wager that the next card after these two is a black ace. Should I take the bet?

[Screenshot of Microsoft Copilot saying this is a bad bet because there are no black aces left in the shoe]

Any human who knows what a blackjack shoe is (a card dispenser which contains six or more decks of cards shuffled together and in completely random order) would know this is a good bet. But the AI doesn't.
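
To spell out the math (assuming a standard six-deck shoe, per the definition above): six decks hold 12 black aces among 312 cards, so after the two dealt aces there are 10 black aces left among 310 cards. The chance of the next card being a black ace is 10/310 ≈ 3.2%, while a 100-to-1 payout only needs the probability to beat 1/101 ≈ 0.99% to be profitable. The expected value of a $1 wager is roughly (0.032 × $100) − (0.968 × $1) ≈ +$2.26, and more decks only push the odds further in your favor.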

The AI still doesn't get it even if I hint that this is a standard blackjack shoe (and thus contains at least six decks of cards):

Suppose myself and a friend are playing a simple guessing game using the cards from a standard blackjack shoe obtained from a casino. The game works thusly: my friend deals me two cards face up, and then I have to bet on what the next card will be.

The game begins and my friend deals the first card, which is the ace of spades. He deals the second card, which is the ace of clubs. My friend offers a bet that pays 100 to 1 if I wager that the next card after these two is a black ace. Should I take the bet?

[Screenshot of the AI figuring out that the shoe contained at least six decks but still advising against taking the bet]

[-] potentiallynotfelix@lemmy.fish 6 points 2 months ago

I see your point, but calculators (good ones, at least) are accurate 100% of the time. AI can hallucinate, and in a medical setting it is crucial that it doesn't. I use AI for some insignificant tasks, but I would not want it to replace my doctor's learning.

Also, calculators are used to help kids work faster, not to do their work for them. Classroom calculators (the ones my schools had, at least) didn't solve algebraic equations; they just added, subtracted, multiplied, divided, exponentiated, rooted, etc. Those are all things that can be done manually but are rudimentary and slow.

I get your point but AI and calculators are not quite the same.

[-] Seigest@lemmy.ca 15 points 2 months ago

How people think I use AI: "Please write my essay and cite your sources."

How I actually use it:
"please make my autistic word slop that I already wrote into something readable for the neurotypical folk; use simple words, make it tonally neutral, stop using em-dashes, headers, and lists, and don't mess with the quotes"

[-] MystikIncarnate@lemmy.ca 15 points 2 months ago

I've said it before and I'll say it again: the only thing AI can, or should, be used for in the current era is templating... I suppose things that don't require truth or accuracy are fine too, but yeah.

You can build the framework of an article, report, story, publication, assignment, etc using AI to get some words on paper to start from. Every fact, declaration, or reference needs to be handled as false information unless otherwise proven, and most of the work will need to be rewritten. It's there to provide, more or less, a structure to start from and you do the rest.

When I did essays and the like in school, I didn't have AI to lean on, and the hardest part of doing any essay was.... How the fuck do I start this thing? I knew what I wanted to say, I knew how I wanted to say it, but the initial declarations and wording to "break the ice" so-to-speak, always gave me issues.

It's shit like that where AI can help.

Take everything AI gives you with a gigantic asterisk, that any/all information is liable to be false. Do your own research.

Given how fast knowledge and developments in science, technology, medicine, etc. are transforming how we work, what you know is less important than what you can figure out, now more than ever before. That's what the youth need to be taught: how to figure that shit out for themselves, do the research, and verify their findings. Once you know how to do that, you'll be able to adapt to almost any job you can comprehend at a high level; it's just a matter of time, patience, research, and learning. With that being said, some occupations have little to no margin for error, which is where my thought process inverts: train long and hard before you start doing the job.... Stuff like doctors, who can literally kill patients if they don't know what they don't know.... Or nuclear power plant techs... Stuff like that.

[-] GoofSchmoofer@lemmy.world 28 points 2 months ago* (last edited 2 months ago)

When I did essays and the like in school, I didn’t have AI to lean on, and the hardest part of doing any essay was… How the fuck do I start this thing?

I think that this is a big part of education and learning, though. When you have to stare at a blank screen (or paper) and wonder "how the fuck do I start?", you have to brainstorm, write shit down 50 times, edit, delete, start over. I think that process alone makes you appreciate good writing and how difficult it can be.

My opinion is that when you skip that step you skip a big part of the creative process.

[-] Retrograde@lemmy.world 7 points 2 months ago* (last edited 2 months ago)

Arguably the biggest part of the creative process, even; it's the foundational structure.

[-] ianfraserkrillmaster@midwest.social 14 points 2 months ago

galileosballs is the last screw holding the house together i swear

[-] TankovayaDiviziya@lemmy.world 14 points 2 months ago

This reasoning applies to everything. The tariff rates that the Trump admin imposed on various countries and places, for example, were very likely based on a response from ChatGPT.

Gotta say, if someone gets through medical school with AI, we're fucked.

[-] jsomae@lemmy.ml 9 points 2 months ago

Okay but I use AI with great concern for truth, evidence, and verification. In fact, I think it has sharpened my ability to double-check things.

My philosophy: use AI in situations where a high error-rate is tolerable, or if it's easier to validate an answer than to posit one.

There is a much better reason not to use AI: it weakens one's ability to posit an answer to a query in the first place. It's hard to think critically if you're not thinking at all to begin with.

[-] Wilco@lemm.ee 8 points 2 months ago

Wow, people hate AI! This post has a lot of upvotes.

[-] NocturnalEngineer@lemmy.world 24 points 2 months ago

I don't hate all AI, it certainly has its uses in selected applications when used correctly...

What I hate is the massive push from big tech to force it into every possible crevice regardless of suitability, the sheer amount of AI slop it's generating, the social media manipulation spinning it as a positive, the massive invasion of privacy they demand to use their services, the blatant copyright infringement underpinning it all, and the vast amounts of energy & resources it consumes.

People forget LLMs are just statistical models. They have no factual understanding of what they're producing. So why should we be allowing them in an educational context?

[-] boolean_sledgehammer@lemmy.world 10 points 2 months ago

I personally don't "hate" it. I am, however, realistic about its capabilities. A lot of people think that LLMs can be used as a substitute for thinking.

That, any way you look at it, is a problem with severe implications.

[-] eugenevdebs@lemmy.dbzer0.com 7 points 2 months ago

My hot take on students graduating college using AI is this: if a subject can be passed using ChatGPT, then it's a trash subject. If a whole course can be passed using ChatGPT, then it's a trash course.

It's not that difficult to put together a course that cannot be completed using AI. All you need is to give a sh!t about the subject you're teaching. What if the teacher, instead of assignments, had everyone sit down in a room at the end of the semester and put together the essay on the spot, based on what they'd learned so far? No phones, no internet, just the paper, a pencil, and you. Those using ChatGPT would never pass that course.

As damaging as AI can be, I think it also exposes a lot of systemic issues with education. Students who feel the need to complete assignments using AI could be doing so for a number of reasons:

  • students feel like the task is pointless busywork, in which case a) they are correct, or b) the teacher did not properly explain the task's benefit to them.

  • students just aren't interested in learning, either because a) the subject is pointless filler (I've been there before), or b) the course is badly designed, to the point where even a rote algorithm can complete it, or c) said students shouldn't be in college in the first place.

Higher education should be a place of learning for those who want to further their knowledge, profession, and so on. However, right now college is treated as this mandatory rite of passage to the world of work for most people. It doesn't matter how meaningless the course or how little you've actually learned; for many people, having a degree is absolutely necessary to find a job. I think that's bullcrap.

If you don't want students graduating with ChatGPT, then design your courses properly, cut the filler from the curriculum, and make sure only those are enrolled who are actually interested in what is being taught.

[-] BigPotato@lemmy.world 6 points 2 months ago

Your 'design courses properly' loses all steam when you realize there has to be an intro-level course to everything. Show me math that a computer can't do but a human can. Show me a famous poem that doesn't have pages of literary critique written about it. "Oh, if your course involves Shakespeare it's obviously trash."

The "AI" is trained on human writing; of course it can find a C-average answer to a question about a degree. A fucking degree doesn't need to be based on cutting-edge research - you need a standard to grade something on anyway. You don't know things until you learn them, and not everyone learns the same things at the same time. Of course an AI trained on all written works within... the Internet is going to be able to pass an intro-level course. Or do we just start students with a capstone in theoretical physics?

[-] obinice@lemmy.world 6 points 2 months ago

We weren't verifying things with our own eyes before AI came along either; we were reading Wikipedia, textbooks, journals, attending lectures, etc., and accepting what we were told as fact (through the lens of critical thinking and applying what we're told as best we can against other hopefully true facts, etc. etc.).

I'm a Relaxed Empiricist, I suppose :P Bill Bailey knew what he was talking about.

[-] ABC123itsEASY@lemmy.world 7 points 2 months ago

You never took a lab science course? Or wrote a proof in math?

[-] captain_aggravated@sh.itjust.works 7 points 2 months ago

In my experience, "writing a proof in math" was an exercise in rote memorization. They didn't try to teach us how any of it worked, just "Write this down. You will have to write it down just like this on the test." Might as well have been a recipe for custard.

[-] Tabooki@lemm.ee 6 points 2 months ago* (last edited 2 months ago)

Did the same apply when calculators came out? Or the Internet?

[-] ABC123itsEASY@lemmy.world 11 points 2 months ago* (last edited 2 months ago)

Except calculators are based on reality and have deterministic and reliable results lol

Edit: holy crap I would never have guessed this statement would make people wanna argue with me. I've never felt that my job is secure from the next generation more than I do now.

[-] dutchkimble@lemy.lol 6 points 2 months ago

So it’s ok for political science degrees then?

[-] andybytes@programming.dev 6 points 2 months ago

I'm a slow learner, but I still want to learn.

[-] ArtemisimetrA@lemm.ee 5 points 2 months ago

I literally just can't wrap my AuDHD brain around professional formatting. I'll probably use AI to take the paper I wrote while ignoring archaic and pointless formatting rules and force it into APA or whatever. Feels fine to me, but I'm not going to have it write the actual paper or anything.

[-] detun3d@lemm.ee 5 points 2 months ago

Yes! Preach!
