submitted 1 year ago by L4s@lemmy.world to c/technology@lemmy.world

College professors are going back to paper exams and handwritten essays to fight students using ChatGPT

The growing number of students using the AI program ChatGPT as a shortcut in their coursework has led some college professors to reconsider their lesson plans for the upcoming fall semester.

[-] HexesofVexes@lemmy.world 131 points 1 year ago

Prof here - take a look at it from our side.

Our job is to evaluate YOUR ability, and AI is a great way to mask poor ability. We have no way to determine whether you did the work or an AI did, and if called into court to certify your expertise, we could not do so beyond a reasonable doubt.

I'm not arguing exams are perfect, mind, but I'd rather doubt a few students' results (maybe it was just a bad exam for them) than always doubt their ability (is any of this their own work?).

Case in point: ALL students on my course with low (<60%) attendance this year scored 70s and 80s on the coursework and 10s and 20s in the OPEN BOOK exam. I doubt those 70s and 80s are real reflections of those students' ability, but they do suggest they can obfuscate AI work well.

[-] kromem@lemmy.world 7 points 1 year ago

Is AI going to go away?

In the real world, will those students be working from a textbook, or from a browser with some form of AI accessible in a few years?

What exactly is being measured and evaluated? Or has the world changed, and existing infrastructure is struggling to cling to the status quo?

Were those years of students being forced to learn cursive in the age of the computer a useful application of their time? Or math classes where a calculator wasn't allowed?

I can only imagine how useful a programming class would be where you have to write your code on a blank sheet of paper with a pen and no linter, then.

Maybe the focus on where and how knowledge is applied needs to be revisited in light of a changing landscape.

For example, how much more practically useful might test questions be that provide a hallucinated wrong answer from ChatGPT and then task the students to identify what was wrong? Or provide them a cross discipline question that expects ChatGPT usage yet would remain challenging because of the scope or nuance?

I get that it's difficult to adjust to something that's changed everything in the field within months.

But it's quite likely that a fair bit of how education has been done over the past 20 years of the digital age (itself a gradual transition to the Internet existing) needs major reworking to adapt to these changes rather than simply oppose them. Otherwise academia ends up in a bubble, further and further detached from real-world practice.

[-] SkiDude@lemmy.world 28 points 1 year ago

If you're going to take a class to learn how to do X, but never actually learn how to do X because you're letting a machine do all the work, why even take the class?

In the real world, even if you're using all the newest, cutting-edge stuff, you still need to understand the concepts behind what you're doing. You still have to know what to put into the tool, and be able to verify that what you get out actually works.

If the tool, AI, whatever, is smart enough to accomplish the task without you actually knowing anything, what the hell are you useful for?

[-] prosp3kt@lemmy.world 2 points 1 year ago

But that's actually most of the jobs we have nowadays. AI is replacing repetitive work, such as magazine writing or script writing.

[-] ZeroHora@lemmy.ml 3 points 1 year ago

Writers are repetitive work????

[-] prosp3kt@lemmy.world 2 points 1 year ago* (last edited 1 year ago)

Well, it seems they will be replaced, at least certain writers: https://www.npr.org/2023/05/20/1177366800/striking-movie-and-tv-writers-worry-that-they-will-be-replaced-by-ai Also call centers: https://www.bbc.com/news/business-65906521 And junior programmers. The thing is, this isn't my opinion; these things are already happening, so it's not debatable.

[-] barsoap@lemm.ee 3 points 1 year ago

And junior programmers

...no. Juniors are hard enough to mentor into writing sensible code in the first place; adding AI to that only makes things worse.

The long-term impact of AI, past what's already happening (and having an actual positive impact on products and companies — discount that Hollywood scriptwriting stuff), will be in industrial automation and logistics/transportation: production lines that can QC on their own, and replacing a whole army of truck and taxi drivers. AI systems will augment fields such as medicine, but not replace actual doctors. Think suggesting alternative diagnoses and recommending suitable tests to rule things out, combating routine tunnel vision by, precisely, being less adaptive than human doctors.

[-] ZeroHora@lemmy.ml 1 points 1 year ago

I understand that they'll be replaced, or at least that the producers want that, but I don't think it's because the work is repetitive — more that a lot of them are needed.

this post was submitted on 13 Aug 2023
899 points (100.0% liked)