Simple solution. Ask the student to talk about their paper. If they know the subject matter, the point of the assignment is met.
This is the right answer. No tool can detect AI generated content with zero false positives, but someone using AI to cheat won't actually know the subject matter.
That's great for some people, but would be absolutely horrible for people like me. I usually know the subject matter, but I tend to have problems getting my thoughts out of my head. So I'd just end up getting double screwed if I were in this situation.
I'm reminded of the lecturer who was accused of being an AI when they sent an email.
Getting the triple-whammy of being accused of using an AI when you didn't, drawing a blank during an oral interview/explanation, and then being penalised like you'd used one anyway, would be hellish.
Same. The anxiety kicks in and everything you ever knew leaves your brain in the span of half a second and doesn’t come back until the other person is free and clear of your presence.
I had to do a lot of presenting in college, which is more or less the same thing. There were peers who struggled with that, but they always talked with the Professors and I never came across a hard ass that would penalize them for it. Might not even be legal if it’s a medical condition.
turn it in is a fucking content farm anyway. you sign over your rights to them. we should insist schools stop using it.
You sign over rights to your works when you turn them in for grades anyway. The school can do whatever they want with your papers.
Which is such a fucking scam. You're paying the school so the school has rights to your shit somehow?
My friend put his own Masters Thesis on libgen because fuck that absolute horseshit.
Based friend
Do you really? As in, if you do a project and submit it, it is then the property of the school? For instance, if you wrote a program or did a research project, the school would have rights to sell it and not you? I had never heard that before.
I've been at the front of the classroom--using tools like TurnItIn is fine for getting "red flags," but I'd never rely on just tools to give someone a zero.
First, unless you're in a class with a hundred people, the professor would have a general idea as to whether you're putting in effort--are they attentive? Do they ask questions? And an informal talk with the person would likely determine how well they understand the content in the paper. Even for people who can't articulate well, there are questions you can ask that will give you a good feel for whether they wrote it.
I've caught cheaters several times, it's not that hard. Will a few slide through? Yes, but they will regardless of how many stupid AI tools you use. Give the students the benefit of the doubt and put in some effort, lazy profs.
Anyone marking an assignment with a TurnItIn report, who is also in possession of half a brain, knows to read through the report and check where the matches are coming from. A high similarity score can come about for many reasons, and in my experience most of those reasons are not due to cheating.
I've also been the one on the opposite side of the classroom. I was lab based, so we didn't use Turn it in.
With a reasonably sized class, you can easily spot which students have worked together because their reports tend to be shockingly similar.
I agree that you get a feel for them with informal conversations and you can see how their submissions tie up with your informal conversations.
I used to tweak the questions year on year. I've suspected there is a black market, an assignment exchange, or something because I caught students submitting work from previous years. They were mainly international students that were only there for their masters year.
A professor once accused me of cheating because he mixed up my project with another student's, marked that student's project twice, and assumed I copied them.... Academia is not always the place of enlightenment people imagine....
Academia is not always the place of enlightenment people imagine....
Was it ever?
Yes, but it's been quite a while since it was. Now it's a heinous cash grab that puts young people who don't understand basic finance into lifelong debt. Long ago, a tool like this would've probably been adopted by academia as something you need to learn to leverage in order to get a better, more thorough understanding of a subject. We've capitalismed education and it's hurting everyone.
Something my instructors could never explain to me is what Turnitin does with the content of papers after they're scanned. How long are they kept? Are they used for verifying anyone else's work? I didn't consent to any of that. When someone runs for office 20 years later are they going to leak old papers? Are they selling that data to other AI trainers? That's some fucking bullshit. It needs to be out of the classroom for more reasons than just false positives.
It gets added to their database forever, as far as I know. Unsure if they're selling it, but based on the trajectory of capitalism, yes, they're selling the fuck out of it to anyone who will buy.
I remember seeing some fine print when signing agreements for my college that any papers I write are intellectual property of the school. I'm guessing that's standard nowadays.
College: You will pay us 30k a year and all your base are belong to us!
Here where I live using AI detection tools is not allowed because they are not 100% correct, which means they might flag an innocent student.
"It is better that one hundred innocent college students fail a class than that one guilty college student write a paper with AI." - Benjamin Academic
Not shocked that this comes from TurnItIn. It has always been a garbage service in my experience. Only useful for flagging quotes, citations, class/instructor names, and my own name as plagiarism.
I saw it flag "the [...]. I am [...]" without even caring about the words in between; it just decided to highlight the most common words in English in that one paragraph, out of spite I guess.
It also once flagged my page numbering lmao, like I'm sorry I didn't know I had to come up with a new and exciting numeric system for every essay I submit
So the teacher uses an unreliable AI tool to do his job, to teach a student a lesson about allegedly using an AI tool to do her work, and the only evidence he has is "this proprietary black box language model says you plagiarized this assignment". No actual plagiarism to cite, just a computer-generated response arbitrarily making accusations. What's the lesson here? AI models are so unreliable that when we use them, we punish you for things you didn't do, so don't you dare use them for schoolwork?
It has a 1% false positive rate. If you have students turn in 20 assignments each semester, roughly 1 in 5 students will get disciplined for plagiarism they didn't commit. All because a teacher was too lazy to do his job without blindly accepting the results of an AI tool, while pretending that they are against such things as a matter of academic integrity...
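A quick back-of-the-envelope check of that "1 in 5" figure, assuming the 1% false-positive rate per assignment and 20 independently checked assignments per student (both numbers taken from the comment above):

```python
# Chance a student gets at least one false positive across 20 assignments,
# assuming a 1% false-positive rate per assignment and independence between checks.
false_positive_rate = 0.01
assignments = 20

p_at_least_one = 1 - (1 - false_positive_rate) ** assignments
print(f"{p_at_least_one:.1%}")  # ~18.2%, i.e. roughly 1 in 5 students
```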
I remember when grammar and spellcheck tools became available, it was hilarious running well-known texts through them and accepting all the changes.
What an awful website
I feel like this article was mostly interested in just showing her pictures. 🙄
She can just show them the version history in Google Docs.
You may not agree with the policy or the tools used, but the rules were clear, and at this point she has no evidence that she did not use some other Generative AI tool. It’s just her word against another AI that is trained to detect generated material.
What is telling is her reaction to all of this, literally making a national news story because she was flagged as a cheater. I promise if she wasn't white or attractive, the NY Post wouldn't do anything. What a massive self own. Long after she leaves school, this story will be the top hit on a Google search of her name and she will have outed herself as a cheater.
You shouldn't put too much stock in these detection tools. Not only do they not work, they flag non-native English speakers for cheating more than native speakers.
And flags anything you've previously submitted, including plans.
What clear rule did she violate though? Like, Grammarly isn't an AI tool. It's a glorified spell check. And several of her previous professors had recommended its use.
What she did "wrong" was write something that TurnItIn decided to flag as AI generated, which it's incredibly far from 100% accurate at.
Like, what should she have done differently?
i don't believe she cheated, but i also don't care.
i do think being a conventionally attractive blonde did help her get coverage.
i also want turn it in to die in a fire.
i'm very conflicted about your comment, but i'm not conflicted about this situation at all: stop using turn it in, and put the girl back in school.
You can't create a reliable AI to detect AI. Anyone who tells you otherwise is selling you snake oil.
How do you provide evidence you didn't use something????
I can make an offline AI say absolutely anything in any way shape or form I would like. It is a tool that improves efficiency in those smart enough to use it. There is nothing about it that is different than what a human can write.
This is as stupid as all of the teachers that used to prevent us from using calculators for math 20 years ago. We should be encouraging everyone to adapt and adopt new technology that improves efficiency, and take on the real task of testing students with intelligent adaptive techniques. It is the antiquated mindset and academia that is the problem. Anyone that can't adapt should be removed. When the student enters the workforce, their use of such efficiency improving tools is critical.
Writing a paper isn't about efficiency; it's about forcing you to synthesize concepts and ideas so that they become more concrete in your mind. It is, in itself, the learning tool. It isn't something to be checked off and churned through like a widget you make at a factory.
Your comment just sounds like you lack, I don't know, care in regards to learning.