[-] nightsky@awful.systems 7 points 3 days ago

I'm just confused because I remember using Dragon Naturally Speaking for Windows 98 in the 90s and it worked pretty accurately already back then for dictation and sometimes it feels as if all of that never happened.

[-] nightsky@awful.systems 7 points 3 days ago

Similar case from 2 years ago with Whisper when transcribing German.

I'm confused by this. Didn't we have pretty decent speech-to-text already, before LLMs? It wasn't perfect but at least didn't hallucinate random things into the text? Why the heck was that replaced with this stuff??

[-] nightsky@awful.systems 29 points 1 week ago

I need to rant about yet another SV tech trend which is getting increasingly annoying.

It's something that is probably less noticeable if you live in a primarily English-speaking region, but if not, there is this very annoying thing that a lot of websites from US tech companies do now: they automatically translate content, without ever asking. English is pretty big on the web, and many English websites are now auto-translated to German for me. And the translations are usually bad. And by that I mean really fucking bad. (And I'm not talking about the translation feature in web browsers, it's the websites themselves.)

Small example of a recent experience: I was browsing stuff on Etsy, and Etsy is one of the websites which does this now. Entire product pages with titles and descriptions and everything are auto-translated, without ever asking me if I want that.

On a product page I then saw:

Material: gefühlt

This was very strange... because that makes no sense at all. "Gefühlt" is the past participle of the verb "fühlen", which means "to feel".

So, to make sense of this you first have to translate it back to English: the past participle of "to feel" is "felt". And of course "felt" is also the name of a fabric (which in German is called "Filz"), so it's a word with more than one meaning in English. You know, words with multiple meanings, like most words in any language. But the brilliant SV engineers do not seem to understand that you cannot translate words without the context they're in.
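The failure mode described here can be reproduced with a toy word-by-word "translator" sketch: each English word maps to exactly one German word, with no surrounding context. The mini-dictionary below is made up purely for illustration.

```python
# Toy context-free translator: one fixed German word per English word.
# Hypothetical mini-dictionary, for illustration only.
EN_TO_DE = {
    "material": "Material",
    "felt": "gefühlt",  # correct for the verb, wrong for the fabric ("Filz")
    "wool": "Wolle",
}

def translate_word_by_word(text: str) -> str:
    # Look each word up in isolation -- exactly the failure mode above:
    # no context, so polysemous words like "felt" pick the wrong sense.
    return " ".join(EN_TO_DE.get(word, word) for word in text.lower().split())

print(translate_word_by_word("Material felt"))  # -> "Material gefühlt"
```

A real MT system is of course far more sophisticated than a dictionary lookup, but the Etsy output quoted above behaves as if the product attribute "felt" was translated with no more context than this.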

And this is not a singular experience. Many product descriptions on Etsy are full of such mistakes now, sometimes to the point of being downright baffling. And eBay does the same now, and the translated product titles and descriptions are a complete shit show as well.

And YouTube started replacing the audio of English videos by default with AI-auto-generated translations spoken by horrible AI voices. By default! It's unbearable. At least there's a button to switch back to the original audio, but I keep having to press it. And now YouTube Shorts is doing it too, except that the Shorts video player does not seem to have any button to disable it at all!

Is it that unimaginable for SV tech that people speak more than one language? And that maybe you fucking ask before shoving a horribly bad machine translation into people's faces?

[-] nightsky@awful.systems 27 points 1 month ago

Wait, they're still drinking coffee, and it takes 58 minutes? Jeez, my genai quantum robot does that in 58 seconds and summarizes the gustatory experience into 3 bullet points for me. Get with the times.

[-] nightsky@awful.systems 30 points 1 month ago

160,000 organisations, sending 251 million messages! [...] A message costs one cent. [...] Microsoft is forecast to spend $80 billion on AI in 2025.

No problem. To break even, they can raise prices just a little bit, from one cent per message to, uuh, $318 per message. I don't think that such a tiny price bump is going to reduce usage or scare away any customers, so they can just do that.
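The sarcastic "tiny price bump" is just the quoted figures divided out. A quick sanity check of the back-of-envelope arithmetic:

```python
# Cost per message if the forecast AI spend had to be recouped
# entirely from the quoted message volume.
messages = 251_000_000       # messages sent, per the quote
ai_spend = 80_000_000_000    # Microsoft's forecast 2025 AI spend, in $
price_per_message = ai_spend / messages
print(f"${price_per_message:.2f} per message")  # -> $318.73 per message
```

So roughly $318 per message versus the current one cent: a markup of about 31,800x.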

[-] nightsky@awful.systems 29 points 1 month ago

From McCarthy's reply:

My current answer to the question of when machines will reach human-level intelligence is that a precise calculation shows that we are between 1.7 and 3.1 Einsteins and .3 Manhattan Projects away from the goal.

omg this statement sounds 100% like something that could be posted today by Sam Altman on X. It's hitting exactly the sweet spot between appearing precise and being super vague, like Altman's "a few thousand days".

[-] nightsky@awful.systems 23 points 2 months ago

If the companies wanted to produce an LLM that didn’t output toxic waste, they could just not put toxic waste into it.

The article title and that part remind me of this quote from Charles Babbage in 1864:

On two occasions I have been asked, — "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" In one case a member of the Upper, and in the other a member of the Lower, House put this question. I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

It feels as if Babbage had already interacted with today's AI pushers.

[-] nightsky@awful.systems 23 points 2 months ago

I hate this position so much, claiming that it's because "the left" wanted "too much". That's not only morally bankrupt, it's factually wrong too. And also ignorant of historical examples. It's lazy and rotten thinking all the way through.

[-] nightsky@awful.systems 25 points 6 months ago

So much wrong with this...

In a way, it reminds me of the wave of entirely fixed/premade loop-based music-making tools from years ago. Where you just drag and drop a number of pre-made loops from a library onto some tracks, and then the software automatically makes them fit together musically and that's it, no further skill or effort required. I always found that fun to play around with for an evening or two, but then it quickly got boring. Because the more you optimize away the creative process, the less interesting it becomes.

Now the AI bros have made it even more streamlined, which means it's even more boring. Great. Also, they appear to think that they are the first people to ever have the idea "let's make music making simple". Not surprising they believe that, because a fundamental tech bro belief is that history is never interesting and can never teach anything, so they never even look at it.

[-] nightsky@awful.systems 28 points 6 months ago

Or they’ll be “AGI” — A Guy Instead.

Lol. This is perfect. Can we please adopt this everywhere.

As for the OpenAI statement... it's interesting how it starts with "We are now confident [...]" to make people think "ooh now comes the real stuff"... but then it quickly makes a sharp turn towards weasel words: "We believe that [...] we may see [...]" . I guess the idea is that the confidence from the first part is supposed to carry over to the second, while retaining a way to later say "look, we didn't promise anything for 2025". But then again, maybe I'm ascribing too much thoughtfulness here, when actually they just throw out random bullshit, just like their "AI".

[-] nightsky@awful.systems 22 points 7 months ago

With your choice of words you are anthropomorphizing LLMs. No valid reasoning can occur when starting from a false point of origin.

Or to put it differently: to me this is similarly ridiculous as if you were arguing that bubble sort may somehow "gain new abilities" and do "horrifying things".
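The bubble sort comparison works because the entire algorithm is a few lines of fully specified, deterministic code. A minimal sketch, to underline that every behaviour of it is determined by what is written and nothing else:

```python
def bubble_sort(items):
    # Plain bubble sort: repeatedly swap adjacent out-of-order pairs
    # until the list is sorted. Nothing here is learned or emergent;
    # every step follows mechanically from these few lines.
    a = list(items)
    for end in range(len(a) - 1, 0, -1):
        for i in range(end):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(bubble_sort([3, 1, 2]))  # -> [1, 2, 3]
```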

[-] nightsky@awful.systems 21 points 9 months ago

I wonder if this signals being at peak hype soon. I mean, how much more outlandish can they get without destroying the hype bubble's foundation, i.e. the suspension of disbelief that all this would somehow become possible in the near future. We're on the level of "arrival of an alien intelligence" now, how much further can they escalate that rhetoric without popping the bubble?


nightsky

joined 10 months ago