submitted 7 months ago by Kit to c/machinelearning@lemmy.ml

Copilot sounds amazing on paper. The free (to 365 subs) version on the web is just ChatGPT (GPT-4), so that's familiar enough. The integration with 365 applications is really what grabs me. Stuff like tossing it 10 spreadsheets and asking it to analyze and compare the data, having a virtual assistant to remind me of upcoming actionables, and summarizing a meeting when I zone out - it all sounds really handy.

I met with Microsoft last week and they're down for giving me a 90-day trial if I want to take it for a spin. Any thoughts or suggestions? Ideally I want to determine whether this will improve productivity for my end users enough to be worth the insane cost of $30/user/mo.

ZapBeebz_@lemmy.world · 1 point · 7 months ago

I will never willingly use any sort of AI chatbot, especially not for anything approaching sensitive corporate data. Honestly, I would pay extra for a version of 365 WITHOUT Copilot (though I also hate paying a subscription at all, so maybe I'll end up dropping 365 instead).

Kit · 1 point · 7 months ago

Can I ask why you feel that way? I use Customer Key at my org, so all of our data is encrypted with our own encryption keys and no Copilot data goes outside of our ecosystem. This was a major selling point for my compliance department.

ZapBeebz_@lemmy.world · 1 point · 7 months ago

Because our society is not in any way, shape, or form ready for AI. One of the most often-touted end uses of AI is replacing busywork jobs. That would be great if our nation (America) didn't tie livable income, healthcare, etc. to having a job. Without better social programs - and, more importantly, a society that wants better social programs - AI will end up being a net negative for humanity.

Also, LLMs are wrong about as often as they're right (if not more often wrong), but they're always confident. And in a world where people generally pick the easy road, that means the errors never get caught or fact-checked, because people think AI is infallible. And if I have to go proofread everything Copilot kicks out anyway, is it really saving me that much time? Especially because adopting it increases the odds that people just trust the program and don't check the outputs, so some wonky-ass code or statements get published.

this post was submitted on 31 Mar 2024