
Just out of curiosity. I have no moral stance on it, if a tool works for you I'm definitely not judging anyone for using it. Do whatever you can to get your work done!

[-] Atramentous@lemm.ee 95 points 1 year ago

High school history teacher here. It’s changed how I do assessments. I’ve used it to rewrite all of the multiple choice/short answer assessments that I do. Being able to quickly create different versions of an assessment has helped me limit instances of cheating, but also to quickly create modified versions for students who require that (due to IEPs or whatever).

The cool thing that I’ve been using it for is to create different types of assessments that I simply didn’t have the time or resources to create myself. For instance, I’ll have it generate a writing passage making a historical argument, but I’ll have AI make the argument inaccurate or incorrectly use evidence, etc. The students have to refute, support, or modify the passage.

Due to the risk of inaccuracies and hallucination I always 100% verify any AI generated piece that I use in class. But it’s been a game changer for me in education.

[-] Atramentous@lemm.ee 37 points 1 year ago

I should also add that I fully inform students and administrators that I’m using AI. Whenever I use an assessment that is created with AI I indicate with a little “Created with ChatGPT” tag. As a history teacher I’m a big believer in citing sources :)

[-] limeaide@lemmy.ml 10 points 1 year ago

How has this been received?

I imagine that pretty soon using ChatGPT is going to be looked down upon like using Wikipedia as a source

[-] Atramentous@lemm.ee 10 points 1 year ago

I would never accept a student’s use of Wikipedia as a source. However, it’s a great place to go initially to get to grips with a topic quickly. Then you can start to dig into different primary and secondary sources.

ChatGPT is the same. I would never use the content it makes without verifying that content first.

[-] phillaholic@lemm.ee 17 points 1 year ago

Is it fair to give different students different wordings of the same questions? If one wording is more confusing than another could it impact their grade?

[-] GhostlyPixel@lemmy.world 11 points 1 year ago* (last edited 1 year ago)

I had professors do different wordings for questions throughout college, I never encountered a professor or TA that wouldn’t clarify if asked, and, generally, the amount of confusing questions evened out across all of the versions, especially over a semester. They usually aren’t doing it to trick students, they just want to make it harder for one student to look at someone else’s test.

There is a risk of it negatively impacting students, but encouraging students to ask for clarification helps a ton.

[-] CptInsane0@lemmy.world 81 points 1 year ago* (last edited 1 year ago)

I don't have any bosses, but as a consultant, I use it a lot. Still gotta charge for the years of experience it takes to understand the output and tweak things, not the hours it takes to do the work.

[-] Thaolin@sh.itjust.works 39 points 1 year ago

Basically this. Knowing the right questions and context to get an output and then translating that into actionable code in a production environment is what I'm being paid to do. Whether copilot or GPT helps reach a conclusion or not doesn't matter. I'm paid for results.

[-] flynnguy@programming.dev 61 points 1 year ago

I had a coworker come to me with an "issue" he learned about. It was wrong, it wasn't really an issue, and it came out that he'd gotten it from ChatGPT and didn't really know what he was talking about, nor could he cite an actual source.

I've also played around with it and it's given me straight up wrong answers. I don't think it's really worth it.

It's just predictive text, it's not really AI.

[-] Echo71Niner@kbin.social 22 points 1 year ago

I concur. ChatGPT is, in fact, not an AI; rather, it operates as a predictive text tool. This is the reason behind the numerous errors it tends to generate, and its lack of self-review prior to generating responses is the clearest indication that it is not an AI. You can point out instances where ChatGPT provides incorrect information, correct it, and within five seconds of asking again it will repeat the same inaccurate information in its response.

[-] rbhfd@lemmy.world 22 points 1 year ago

It's definitely not artificial general intelligence, but it's for sure AI.

None of the criteria you mentioned are needed for it be labeled as AI. Definition from Oxford Libraries:

the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

It definitely fits in this category. It is being used in ways that previously, customer support or a domain expert was needed to talk to. Yes, it makes mistakes, but so do humans. And even if talking to a human would still be better, it's still a useful AI tool, even if it's not flawless yet.

[-] dbilitated@aussie.zone 8 points 1 year ago

i think learning where it can actually help is a bit of an art - it's just predictive text, but it's very good predictive text - if you know what you need and get good at giving it the right input it can save a huge amount of time. you're right though, it doesn't offer much if you don't already know what you need.
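The "very good predictive text" point can be made concrete with a toy model. Below is a minimal sketch in Python of a bigram next-word predictor; the corpus is invented for illustration, and real LLMs use transformer networks over subword tokens rather than a lookup table, but the generate-one-token-at-a-time loop is the same shape:

```python
from collections import Counter, defaultdict

# Build a bigram "predictive text" model from a tiny corpus:
# for each word, count which words follow it.
corpus = "the cat sat on the mat the cat ate the fish".split()
nexts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nexts[a][b] += 1

def predict(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = nexts[word]
    return counts.most_common(1)[0][0] if counts else None

# Greedy generation: repeatedly append the single most likely next word.
out = ["the"]
for _ in range(4):
    nxt = predict(out[-1])
    if nxt is None:
        break
    out.append(nxt)
print(" ".join(out))  # → "the cat sat on the"
```

The model happily emits fluent-looking sequences it has never checked against anything, which is the small-scale version of why these tools sound confident while being wrong.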

[-] paNic@feddit.uk 51 points 1 year ago

A junior team member sent me an AI-generated sick note a few weeks ago. It was many, many neat and equally-sized paragraphs of badly written excuses. I would have accepted "I can't come in to work today because I feel unwell" but now I can't take this person quite so seriously any more.

[-] ante@lemmy.world 18 points 1 year ago

Classic over explaining to cover up a lie.

I never send anything other than "I'll be out of the office today" for every PTO notice.

[-] ThatOneDudeFromOhio@lemmy.world 11 points 1 year ago

Ask yourself why they felt the need to generate an AI sick note instead of being honest 👌

[-] some_guy@lemmy.sdf.org 9 points 1 year ago

I dunno, I'd consider it a moral failing on the part of the person who couldn't be honest and direct, even if there's a cultural issue in the workplace.

[-] Lockely@pawb.social 40 points 1 year ago

I've played around with it for personal amusement, but the output is straight up garbage for my purposes. I'd never use it for work. Anyone entering proprietary company information into it should get a verbal shakedown by their company's information security officer, because anything you input automatically joins their training database, and you're exposing your company to liability when, not if, OpenAI suffers another data breach.

[-] lemmyvore@feddit.nl 14 points 1 year ago

The very act of sharing company information with it can land you and the company in hot water in certain industries, regardless of whether OpenAI is broken into.

[-] givesomefucks@lemmy.world 39 points 1 year ago

A lot of people are going to get fucked if they are...

It's using the "startup method" where they gave away a good service for free, but they already cut back on resources when it got popular. So what you read about it being able to do six months ago, it can't do today.

Eventually they'll introduce a paid version that might be able to do what the free one did.

But if you're just blindly trusting it, you might have months of low quality work and haven't noticed.

Like the lawyers recently finding out it would just make up caselaw and reference cases. We're going to see that happen more and more as resources are cut back.

[-] redballooon@lemm.ee 21 points 1 year ago* (last edited 1 year ago)

Huh? They already introduced the paid version half a year ago, and that was the one being responsible for the buzz all along. The free version was mediocre to begin with and has not gotten better.

When people complain that ChatGPT doesn’t comply to their expectations it’s usually a confusion between these two.

[-] manillaface@kbin.social 18 points 1 year ago

Like the lawyers recently finding out it would just make up caselaw and reference cases. We’re going to see that happen more and more as resources are cut back.

It’s been notorious for doing that from the very beginning though

[-] li10@feddit.uk 10 points 1 year ago

Anyone blindly trusting it is a grade A moron, and would’ve just found another way to fuck up whatever they were working on if ChatGPT didn’t exist.

ChatGPT is a tool, if someone doesn’t know what they’re doing with it then they are gonna break stuff, not ChatGPT.

[-] diffuselight@lemmy.world 8 points 1 year ago

That may have been their plan, but Meta fucked them from behind and released LLaMA, which now runs on local machines at up to 30B parameters and, by the end of the year, will run at better than GPT-3.5 ability on an iPhone.

Local LLMs, like Airoboros, WizardLM, Stable Vicuña or Stable Coder, are real alternatives in many domains.

[-] platypode@sh.itjust.works 37 points 1 year ago

I've been using it a little to automate really stupid simple programming tasks. I've found it's really bad at producing feasible code for anything beyond the grasp of a first-year CS student, but there's an awful lot of dumb code that needs to be written and it's certainly easier than doing it by hand.

As long as you're very precise about what you want, you don't expect too much, and you check its work, it's a pretty useful tool.
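"Being very precise about what you want" in practice means writing the prompt like a small spec. Here is a sketch in Python of one way to do that; the helper, the wording, and the `slugify` example are all hypothetical, and whatever the model returns still needs the same review as before:

```python
def build_prompt(function_name, description, examples):
    """Assemble a precise code-generation prompt: name, behavior, and
    input/output examples, so the model has little room to guess."""
    lines = [
        f"Write a Python function `{function_name}`.",
        f"Behavior: {description}",
        "It must pass these examples:",
    ]
    lines += [f"  {call} -> {expected!r}" for call, expected in examples]
    lines.append("Return only the function, no commentary.")
    return "\n".join(lines)

prompt = build_prompt(
    "slugify",
    "lowercase the text and replace runs of non-alphanumerics with '-'",
    [("slugify('Hello, World!')", "hello-world")],
)
```

Pinning the behavior down with concrete examples is also what makes "check its work" cheap: the examples double as test cases for the generated code.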

[-] jecxjo@midwest.social 9 points 1 year ago

I've found it useful for basically finding the example code for a 3rd party library. Basically a version of Stack Exchange that can be better or worse.

[-] fidodo@lemm.ee 26 points 1 year ago* (last edited 1 year ago)

Why should anyone care? I don't go around telling people every time I use stack overflow. Gotta keep in mind gpt makes shit up half the time so I of course test and cross reference everything but it's great for narrowing your search space.

[-] akulium@feddit.de 16 points 1 year ago

I did some programming assignments in a group of two. Every time, my partner sent me his code without further explanation and let me check his solution.

The first time, his code was really good and better than I could have come up with, but there was a small obvious mistake in there. The second time his code to do the same thing was awful and wrong. I asked him whether he used ChatGPT and he admitted it. I did the rest of the assignments alone.

I think it is fine to use ChatGPT if you know what you are doing, but if you don't know what you are doing and try to hide it with ChatGPT, then people will find out. In that case you should discuss with the people you are working with before you waste their time.

[-] JoCrichton@lemmy.world 19 points 1 year ago

Not sure how it could help me solder or find faults on PCBs.

[-] RagnarokOnline@reddthat.com 18 points 1 year ago

Only used it a couple of times for work when researching some broad topics like data governance concepts.

It’s a good tool for learning because you can ask it about a subject and then ask it to explain the subject “as a metaphor to improve comprehension” and it does a pretty good job. Just make sure you use some outside resources to ensure you’re not being hallucinated all over.

My bosses use it to write their emails (ESL).

[-] bitsplease@lemmy.ml 17 points 1 year ago* (last edited 1 year ago)

not chatGPT - but I tried using copilot for a month or two to speed up my work (backend engineer). Wound up unsubscribing and removing the plugin after not too long, because I found it had the opposite effect.

Basically instead of speeding my coding up, it slowed it down, because instead of my thought process being

  1. Think about the requirements
  2. Work out how best to achieve those requirements within the code I'm working on
  3. Write the code

It would be

  1. Think about the requirements
  2. Work out how best to achieve those requirements within the code I'm working on
  3. Start writing the code and wait for the auto complete
  4. Read the auto complete and decide if it does exactly what I want
  5. Do one of the following, depending on step 4:
     - 5a. Use the autocomplete as-is
     - 5b. Use the autocomplete, then modify it to fix a few issues or account for a requirement it missed
     - 5c. Ignore the autocomplete and write the code yourself

idk about you, but the first set of steps just seems like a whole lot less hassle than the second set, especially since for anything that involved any business logic or internal libraries, I found myself using 5c far more often than the other two. And as a bonus, I actually fully understand all the code committed under my username, on account of actually having written it.

I will say though in the interest of fairness, there were a few instances where I was blown away with copilot's ability to figure out what I was trying to do and give a solution for it. Most of these times were when I was writing semi-complex DB queries (via Django's ORM), so if you're just writing a dead simple CRUD API without much complex business logic, you may find value in it, but for the most part, I found that it just increased cognitive overhead and time spent on my tickets
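For context, the "semi-complex DB queries" where autocomplete tends to shine are things like grouped aggregates with filters; in Django ORM terms, a `filter`/`values`/`annotate` chain. Since a runnable Django setup is out of scope here, the sketch below shows the equivalent query with Python's stdlib `sqlite3`, with the table and data invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, total REAL, status TEXT);
    INSERT INTO orders VALUES
        ('alice', 120.0, 'paid'),
        ('alice',  30.0, 'refunded'),
        ('bob',    75.0, 'paid'),
        ('bob',    25.0, 'paid');
""")

# Per-customer revenue from paid orders only, highest first -- roughly
# Order.objects.filter(status='paid').values('customer')
#              .annotate(revenue=Sum('total')).order_by('-revenue')
rows = conn.execute("""
    SELECT customer, SUM(total) AS revenue
    FROM orders
    WHERE status = 'paid'
    GROUP BY customer
    ORDER BY revenue DESC
""").fetchall()
```

Queries like this are mechanical enough that an assistant can often complete them correctly from a comment, which is exactly the pattern described above.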

EDIT: I did use chatGPT for my peer reviews this year though and thought it worked really well for that sort of thing. I just put in what I liked about my coworkers and where I thought they could improve in simple english and it spat out very professional peer reviews in the format expected by the review form

[-] henfredemars@infosec.pub 16 points 1 year ago* (last edited 1 year ago)

I use it to write performance reviews because in reality HR has already decided the results before the evaluations.

I'm not wasting my valuable time writing text that is then ignored. If you want a promotion, get a new job.

To be clear: I don't support this but it's the reality I live in.

[-] CaptainPike@beehaw.org 14 points 1 year ago

I'm a DM using ChatGPT to help me build things for my DnD campaign/world and not telling my players. Does that count? I still do most of the heavy lifting but it's nice to be able to brainstorm and get ideas bounced back. I don't exactly have friends to do that with.

[-] PurpleTentacle@sh.itjust.works 13 points 1 year ago

As a language model, I have neither boss nor co-workers.

Some of my co-workers use it, and it's fairly obvious, usually because they are putting out even more inaccurate info than normal.

[-] Fizz@lemmy.nz 12 points 1 year ago

When I'm pissed off I use it to make my emails sound friendly.

[-] Haus@kbin.social 10 points 1 year ago

Yesterday I was working on a training PowerPoint and it occurred to me that I should probably simplify the language. Had GPT convert it to 3rd-grade language, and it worked pretty well. Not perfect, but it helped.

I'm also writing an app as a hobby and, although GPT goes batshit crazy from time to time, overall it has done most of the coding grunt-work pretty well.

[-] CylonBunny@lemmy.world 10 points 1 year ago* (last edited 1 year ago)

I find it helpful to translate medical abbreviations to English. Our doctors tend to go overboard with abbreviations, there are lots I know but there are always a few that leave me scratching my head. ChatGPT seems really good at guessing what they mean! There are other tools I can use, but ChatGPT is faster and more convenient - I can give it context and that makes it more accurate.

[-] HR_Pufnstuf@lemmy.world 10 points 1 year ago

I've done so on rare occasion, but every time it made stuff up. Wanted terraform examples for specific things... and it completely invented resource types that don't exist.

[-] awkwardparticle@kbin.social 10 points 1 year ago

My whole team was playing around with it, and for a few weeks it was working pretty well for a couple of things, until the answers started to become incorrect and not useful.

[-] mojo@lemm.ee 10 points 1 year ago

Yes, although there's been a huge spike in cancer diagnosis I've been giving out since doing so. Whoops!

[-] a_seattle_ian@lemmy.ml 9 points 1 year ago* (last edited 1 year ago)

I'm interested in finding ways to use it, but if I'm writing code I really like the spectrum of different answers on Stack Overflow, with comments on WHY they did it that way. Might use it for boring emails though.

[-] jayemecee@lemmy.world 9 points 1 year ago* (last edited 1 year ago)

I'm a devops engineer, use it daily. Not to write e-mails, but to frequently ask the best approach to solve an issue or bash/sql/anything queries. My boss and colleagues know about it and use it too though

[-] Jay@sh.itjust.works 8 points 1 year ago

I use ChatGPT fairly frequently. For example, I often have to write a business email. I'm usually pretty good at it. But sometimes I don't have the time or desire to find the right wording. This is where ChatGPT comes into play: I have trained my writing style using several examples and then simply have the quickly written emails beautified.

My boss doesn't know about it, but I don't hide it either. My company is very, very slow on the technical side and will only understand the benefits of AI in a few years.

[-] hsl@wayfarershaven.eu 8 points 1 year ago

My job actively encourages using AI to be more efficient and rewards curiosity/creative approaches. I'm in IT management.

[-] Behaviorbabe@kbin.social 8 points 1 year ago

Coworker of mine admitted to using this for writing treatment plans. Super unethical and unrepentant about it. Why? Treatment plans are individual, and contain PII. I used it for research a few times and it returned sources that are considered bunk at best and hated within the community for their history. So I just went back to my journal aggregation.

[-] fede@lemmy.world 8 points 1 year ago

I use it for help with formal language sometimes, but I do not trust it and would never try to pass off a whole generated text as mine. I always review it and try to make it sound my own.

[-] WackyTabbacy42069@reddthat.com 8 points 1 year ago

I use GPT-4 daily. I worked with it to create a quick and convenient app on my smartwatch, which allows it to provide wisdom and guidance fast whenever I need it. For more granular things, I use its BingChat interface, which can search the web and see images. The AI has helped me with understanding how to complete tasks, providing counseling for me, finding bugs in my code, writing functions, teaching me how to use software like Excel and Outlook, and giving me random information about various curiosities that pop into mind.

I don't keep it a secret and tell anyone who asks. Plus it's kinda obvious that something is going on with me. I always wear bone conducting headsets that allow the AI to whisper in my ear without shutting me out to the world, and sometimes talk to my watch

The responses to knowing what I'm doing have almost always been extreme: very positive or very negative. The machine is controversial, and when some can no longer stay in comfortable denial of its efficacy they turn to speaking out against its use

[-] aaaaaaadjsf@hexbear.net 8 points 1 year ago* (last edited 1 year ago)

I know many people slightly younger than me who are using ChatGPT to breeze through university assignments. Apparently there's one website that uses GPT that even draws diagrams for you, so you don't have to make 500 UML and class diagrams that take forever to create.

this post was submitted on 10 Aug 2023
330 points (100.0% liked)

Asklemmy
