
Does AI actually help students learn? A recent experiment in a high school provides a cautionary tale. 

Researchers at the University of Pennsylvania found that Turkish high school students who had access to ChatGPT while doing practice math problems did worse on a math test compared with students who didn’t have access to ChatGPT. Those with ChatGPT solved 48 percent more of the practice problems correctly, but they ultimately scored 17 percent worse on a test of the topic that the students were learning.

A third group of students had access to a revised version of ChatGPT that functioned more like a tutor. This chatbot was programmed to provide hints without directly divulging the answer. The students who used it did spectacularly better on the practice problems, solving 127 percent more of them correctly compared with students who did their practice work without any high-tech aids. But on a test afterwards, these AI-tutored students did no better. Students who just did their practice problems the old-fashioned way — on their own — matched their test scores.

[-] N0body@lemmy.dbzer0.com 197 points 2 months ago

Traditional instruction gave the same result as a bleeding-edge ChatGPT tutorial bot. Imagine what would happen if a tiny fraction of the billions spent to develop this technology went into funding improved traditional instruction.

Better paid teachers, better resources, studies geared at optimizing traditional instruction, etc.

"Move fast and break things" was always a stupid goal. Turbocharging it with all this money is killing the tried-and-true options that actually produce results, while straining the power grid and worsening global warming.

[-] TallonMetroid@lemmy.world 109 points 2 months ago

Investing in actual education infrastructure won't get VC techbros their yachts, though.

[-] elvith@feddit.org 47 points 2 months ago

It’s the other way round: education makes for less gullible people and for workers who demand their rights more freely and easily - and then those workers come for the yachts…

[-] Petter1@lemm.ee 17 points 2 months ago

Imagine if all the money spent on war were invested in education 🫣 what a beautiful world we would live in.

[-] otp@sh.itjust.works 17 points 2 months ago

Traditional instruction gave the same result as a bleeding-edge ChatGPT tutorial bot.

Interesting way of looking at it. I disagree with your conclusion about the study, though.

It seems like the AI tool would be helpful for things like assignments rather than tests. I think it's intellectually dishonest to ignore the gains in some environments because it doesn't have gains in others.

You're also comparing a young technology to methods that have been adapted over hundreds of thousands of years. Was the first automobile entirely superior to every horse?

I get that some people just hate AI because it's AI. For the people interested in nuance, I think this study is interesting. I think other studies will seek to build on it.

[-] 2ugly2live@lemmy.world 65 points 2 months ago

I don't even know if this is ChatGPT's fault. This would be the same outcome if someone just gave them the answers to a study packet. Yes, they'll have the answers because someone (or something) gave them out, but they won't know how to get those answers without being taught. Surprise: for kids to learn, they need to be taught. Shocker.

[-] Buddahriffic@lemmy.world 9 points 2 months ago

I've found ChatGPT to be a great learning aid. You just don't use it to jump straight to the answers; you use it to explore the gaps and edges of what you know or understand. Add context and details, not final answers.

[-] IzzyScissor@lemmy.world 12 points 2 months ago

The study shows that once you remove the LLM though, the benefit disappears. If you rely on an LLM to help break things down or add context and details, you don't learn those skills on your own.

I used it to learn some coding, but without using it again, I couldn't replicate my own code. It's a struggle, but I don't think using it as a teaching aid is a good idea yet, maybe ever.

[-] LifeInMultipleChoice@lemmy.world 24 points 2 months ago

"tests designed for use by people who don't use chatgpt is performed by people who don't"

This is the same fn calculator argument we had 20 years ago.

A tool is a tool. It will come in handy, but if it will be there in life, then it's a dumb test

[-] conciselyverbose@sh.itjust.works 43 points 2 months ago

The point of learning isn't just access to that information later. That basic understanding gets built on all the way up through the end of your education, and it is the basis for all sorts of real-world application.

There's no overlap at all between people who can't pass a test without an LLM and people who understand the material.

[-] bluewing@lemm.ee 11 points 2 months ago

As someone who has taught math to students in a classroom, unless you have at least a basic understanding of HOW the numbers are supposed to work, the tool - a calculator - is useless. While getting the correct answer is important, I was more concerned with HOW you got that answer. Because if you know how you got that answer, then your ability to get the correct answer skyrockets.

Because doing it your way leads to blindly relying on AI and believing those answers are always right. Because it's just a tool, right?

[-] blazeknave@lemmy.world 58 points 2 months ago

Kids who take shortcuts and don't learn suck at recalling knowledge they never had.

[-] ameancow@lemmy.world 24 points 2 months ago* (last edited 2 months ago)

The only reason we're trying to somehow compromise and allow or even incorporate cheating software into student education is because the tech-bros and singularity cultists have been hyping this technology like it's the new, unstoppable force of nature that is going to wash over all things and bring about the new Golden Age of humanity as none of us have to work ever again.

Meanwhile, 80% of AI startups sink, and something like 75% of the "new techs" like AI drive-thru orders and AI phone support go to call centers in India and the Philippines. The only thing we seem to have gotten is the absolute rotting destruction of all content on the internet, and children growing up thinking it's normal to consume this watered-down, plagiarized, worthless content.

[-] ChickenLadyLovesLife@lemmy.world 12 points 2 months ago

I took German in high school and cheated by inventing my own runic script. I would draw elaborate fantasy/sci-fi drawings on the covers of my notebooks with the German verb declensions and whatnot written all over monoliths or knight's armor or dueling spaceships, using my own script instead of regular characters, and then have these notebooks sitting on my desk while taking the tests. I got 100% on every test, and now the only German I can speak is the bullshit I remember Nightcrawler from the X-Men saying. Unglaublich!

[-] Insig@lemmy.world 50 points 2 months ago

At work we gave a 16/17-year-old work experience over the summer. He was using ChatGPT and not understanding the code it was outputting.

In his last week he asked why he was writing a print statement, something like:

print (f"message {thing} ")
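(For the curious: a minimal sketch of what that f-string does; "thing" here stands in for whatever variable he actually had in his code.)

    # The f prefix marks a Python f-string: expressions inside {braces}
    # are evaluated and their results spliced into the string.
    thing = "hello"
    print(f"message {thing}")       # prints: message hello

    # The same output without an f-string, for comparison:
    print("message " + str(thing))  # prints: message hello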

[-] aniki@discuss.tchncs.de 9 points 2 months ago

Sounds like operator error, because he could have asked ChatGPT and gotten the correct answer about Python f-strings...

[-] ulterno@lemmy.kde.social 9 points 2 months ago

Students first need to learn to:

  1. Break down the line of code, then
  2. Ask the right questions

The student in question probably didn't develop the mental faculties required to think, "Hmm... what the 'f'?"

A similar thingy happened to me when I had to teach a BTech grad with 2 years of prior experience. At first, I found it hard to believe that someone couldn't ask such questions of themselves, by themselves. I am repeatedly dumbfounded at how someone manages to be so ignorant of what they are typing, and I've recently realised (after interacting with multiple such people) that this is actually the norm^[and that I am the weirdo for trying hard and visualising the C++ abstract machine in my mind].

[-] maegul@lemmy.ml 34 points 2 months ago

Yea, this highlights a fundamental tension I think: sometimes, perhaps oftentimes, the point of doing something is the doing itself, not the result.

Tech is hyper focused on removing the "doing" and reproducing the result. Now that it's trying to put itself into the "thinking" part of human work, this tension is making itself unavoidable.

I think we can all take it as a given that we don't want to hand total control to machines, simply because of accountability issues. Which means we want a human "in the loop" to ensure things stay sensible. But the ability of that human to keep things sensible requires skills, experience and insight. And all of the focus our education system now has on grades and certificates has led us astray into thinking that practice and experience don't mean that much. In a way the labour market and employers are relevant here in their insistence on experience (to the point of absurdity sometimes).

Bottom line is that we humans are doing machines, and we learn through practice and experience, in ways I suspect are much closer to building intuitions. Being stuck on a problem, being confused and getting things wrong are all part of this experience. Making it easier to get the right answer is not making education better. LLMs likely have no good role to play in education, and I wouldn't be surprised if an outright ban, in what may become a harshly fought battle, isn't too far away.

All that being said, I also think LLMs raise questions about what it is we're doing with our education and tests and whether the simple response to their existence is to conclude that anything an LLM can easily do well isn't worth assessing. Of course, as I've said above, that's likely manifestly rubbish ... building up an intelligent and capable human likely requires getting them to do things an LLM could easily do. But the question still stands I think about whether we need to also find a way to focus more on the less mechanical parts of human intelligence and education.

[-] Soup@lemmy.cafe 31 points 2 months ago

Kids using an AI system trained on edgelord Reddit posts aren’t doing well on tests?

Ya don’t say.

[-] flerp@lemm.ee 28 points 2 months ago

Like any tool, it depends how you use it. I have been learning a lot of math recently and have been chatting with AI to increase my understanding of the concepts. There are times when the textbook shows some steps and I don't understand why they're happening, so I've questioned the AI about it. Sometimes it takes a few tries of asking until you figure out the right question that gets the answer you need, but that process of thinking helps you along the way anyway by crystallizing in your brain what exactly it is that you don't understand.

I have found it to be a very helpful tool in my educational path. However, I am learning things because I want to understand them, not because I have to pass a test, and that determination to want to understand makes a big difference. Just getting hints to help you solve the problem might not really help in the long run, but if you're actually curious about what you're learning and focus on getting a deeper understanding of why and how something works rather than just getting the right answer, it can be a very useful tool.

[-] Rekorse@sh.itjust.works 20 points 2 months ago

Why are you so confident that the things you are learning from AI are correct? Are you just using it to gather other sources to review by hand or are you trying to have conversations with the AI?

We've all seen AI get the correct answer while the show-your-work part is nonsense, or vice versa. How do you verify what AI outputs to you?

[-] GaMEChld@lemmy.world 8 points 2 months ago

You check its work. I used it to calculate efficiency in a factory game and went through and made corrections to the inconsistencies I spotted. Always check its work.
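(As a rough sketch of the kind of check I mean, with entirely made-up rates rather than numbers from any real game: recompute the claimed ratio yourself and see whether supply actually covers demand.)

    # Verifying an AI-suggested miner-to-smelter ratio with hypothetical rates.
    ORE_PER_MINER = 0.5     # ore per second produced (made-up number)
    ORE_PER_SMELTER = 0.3   # ore per second consumed (made-up number)

    miners, smelters = 3, 5                  # the ratio the chatbot suggested
    supply = miners * ORE_PER_MINER          # 1.5 ore/s
    demand = smelters * ORE_PER_SMELTER      # 1.5 ore/s
    print(supply >= demand)                  # True: the smelters stay fed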

[-] glowie@h4x0r.host 27 points 2 months ago

Of all the students in the world, they picked ones from a "Turkish high school". Any clear indication why there, of all places, for a study conducted by a US university?

[-] catloaf@lemm.ee 17 points 2 months ago

I'm guessing there was a previous connection with some of the study authors.

I skimmed the paper, and I didn't see it mention language. I'd be more interested to know if they were using ChatGPT in English or Turkish, and how that would affect performance, since I assume the model is trained on significantly more English language data than Turkish.

[-] Lemminary@lemmy.world 10 points 2 months ago

If I had access to ChatGPT during my college years and it helped me parse things I didn't fully understand from the texts or provided much-needed context for what I was studying, I would've done much better having integrated my learning. That's one of the areas where ChatGPT shines. I only got there on my way out. But math problems? Ugh.

[-] ForgotAboutDre@lemmy.world 23 points 2 months ago

When you automate these processes you lose the experience. I wouldn't be surprised if you couldn't parse information as well as you can now, had you had access to ChatGPT back then.

It's hard to get better at solving problems if something else does them for you.

Also, the reliability of these systems is poor, and they're specifically trained to produce output that appears correct, not output that actually is correct.

[-] Vanth@reddthat.com 26 points 2 months ago

I'm not entirely sold on the argument I lay out here, but this is where I would start were I to defend using chatGPT in school as they laid out in their experiment.

It's a tool. Just like a calculator. If a kid learns and does all their homework with a calculator, then suddenly it's taken away for a test, of course they will do poorly. Contrary to what we were warned about as kids though, each of us does carry a calculator around in our pocket at nearly all times.

We're not far off from a world where having an AI assistant with us 24/7 is feasible. Why not teach kids to use the tools they will have in their pocket for the rest of their lives?

[-] filister@lemmy.world 19 points 2 months ago

I think here you also need to teach your kid not to trust this tool unconditionally and to question the quality of its output, as well as how to write better prompts. It's the same as with Google: if you put in shitty queries, you will get subpar results.

And believe me, I have seen plenty of tech people asking the lamest prompts.

[-] Schal330@lemmy.world 17 points 2 months ago

As adults we are dubious of the results that AI gives us. We take the answers with a handful of salt, and I feel like over the years we have built up a skillset for using search engines and sifting through the results. Kids haven't got years of experience with this, so they may take what is said as true and not question the results.

As you say, the kids should be taught to use the tool properly, and verify the answers. AI is going to be forced onto us whether we like it or not, people should be empowered to use it and not accept what it puts out as gospel.

[-] Petter1@lemm.ee 8 points 2 months ago

This is true for the whole internet, not only AI chatbots. Kids need to be taught that there is BS around. In fact, kids had to learn that even pre-internet. Every human has to learn that you cannot blindly trust anything, that you have to think critically. This is nothing new. AI chatbots just show how flawed human education is these days.

[-] michaelmrose@lemmy.world 22 points 2 months ago

TLDR: ChatGPT is terrible at math, and most students just ask it for the answer. Giving students the ability to ask something that doesn't know math for the answer makes them less capable. An enhanced chatbot that was pre-fed with questions and correct answers didn't screw up the learning process in the same fashion, but it also didn't help them perform any better on the test, because again they just asked it to spoon-feed them the answer.

references

ChatGPT’s errors also may have been a contributing factor. The chatbot only answered the math problems correctly half of the time. Its arithmetic computations were wrong 8 percent of the time, but the bigger problem was that its step-by-step approach for how to solve a problem was wrong 42 percent of the time.

The tutoring version of ChatGPT was directly fed the correct solutions and these errors were minimized.

The researchers believe the problem is that students are using the chatbot as a “crutch.” When they analyzed the questions that students typed into ChatGPT, students often simply asked for the answer.

[-] MystikIncarnate@lemmy.ca 21 points 2 months ago

Something I've noticed with institutional education is that they're not looking for the factually correct answer; they're looking for the answer that matches whatever you were told in class. Those two things should not be different, but in my experience, they're not always the same thing.

I have no idea if this is a factor here, but it's something I've noticed. I have actually answered questions with a factually wrong answer, because that's what was taught, just to get the marks.

[-] fne8w2ah@lemmy.world 15 points 2 months ago

Taking too many shortcuts doesn't help anyone learn anything.

[-] Cornelius_Wangenheim@lemmy.world 14 points 2 months ago

This isn't a new issue. Wolfram Alpha has been around for 15 years and can easily handle high-school-level math problems.

[-] zarcher@lemmy.world 33 points 2 months ago

Except Wolfram Alpha is able to correctly explain step-by-step solutions, which was an aid in my education.

[-] Maggoty@lemmy.world 10 points 2 months ago

ChatGPT lies, which is kind of an issue in education.

As far as seeing the answer, I learned a significant amount of math by looking at the answer for a type of question and working backwards. That's not the issue as long as you're honestly trying to understand the process.

[-] Ilandar@aussie.zone 9 points 2 months ago

What do the results of the third group suggest? AI doesn't appear to have hindered their ability to manage by themselves under test conditions, but it did help them significantly with their practice results. You could argue the positive reinforcement an AI tutor can provide during test preparations might help some students with their confidence and pre-exam nerves, which will allow them to perform closer to their best under exam conditions.
