[-] mountainriver@awful.systems 6 points 9 hours ago

One author (Daniel) correctly predicted chain-of-thought reasoning, inference scaling, and sweeping chip export controls one year BEFORE ChatGPT existed

Ah, this reminds me of an old book I came across years ago. Printed around 1920, it spent its first half on examples of how the future had been foretold correctly many, many times throughout history. The author had also made several correct foretellings himself, among them the Great War. Apparently he tried to warn the Kaiser.

The second half was his visions of the future, including a great war...

Unfortunately it was France and Russia invading the Nordic countries in the 1930s. The Franco-Russian alliance almost got beaten thanks to new electric weapons, but then God himself intervened and brought the defenders low, because the people had been sinning and turning away from Christianity.

An early clue that the author was a bit particular was when he argued that he got his ability to predict the future from being one quarter Sami, but could still be trusted because he was "3/4 solid Nordic stock". The best combo, apparently, and a totally normal way to describe yourself.

[-] mountainriver@awful.systems 4 points 10 hours ago

It's easy to read it as first and fourth "world", but it's actually first and fourth "word". But the first and fourth word of what? Mein Kampf? The 18 words?

[-] mountainriver@awful.systems 3 points 22 hours ago

Sharing the suffering multiplies the suffering.

[-] mountainriver@awful.systems 14 points 1 day ago

I usually go with "Scientology for the 21st century". For most people that just registers as "weird cult", which is close enough.

For those that are into weird cults, you get questions about Xenu and such, and can answer "No, they are not into Xenu; instead they want to build their god. Out of chatbots." And so on. If they are interested in weird cult shit and have already accepted that we are talking about weird cults, the weirdness isn't a problem. If not, it stops at "Scientology for the 21st century".


Capgemini has polled executives, customer service workers and consumers (but mostly executives) and found out that customer service sucks, and working in customer service sucks even more. Customers apparently want prompt solutions to their problems. Customer service personnel feel that they are put in a position to upsell customers. For some reason this makes both sides unhappy.

Solution? Chatbots!

There is some nice rhetorical footwork going on in the report, so it was presumably written by a human. By conflating chatbots with live chat (you know, with someone actually alive) and never once asking whether chatbots can actually solve the problems with customer service, they come to the conclusion that chatbots must be the answer. After all, lots of the surveyed executives think they will be the answer. And when have executives ever been wrong?

[-] mountainriver@awful.systems 51 points 3 weeks ago

That was gross.

On a related note, one of my kids learnt about how phrenology was once used for scientific racism, and my other kid was shocked, dismayed and didn't want to believe it. So I had to confirm that yes, people did that; yes, it was very racist; and yes, they considered themselves scientists and were viewed as such by the scientific community of the time.

I didn't inform them that phrenology and scientific racism are still with us. There is a limit to how many illusions you want to break in a day.

[-] mountainriver@awful.systems 22 points 2 months ago

So Elsevier has evolved from gatekeeping science to sabotaging science. Sounds like something an unaligned AGI would do.

Was the unaligned AGI capitalism all along?

[-] mountainriver@awful.systems 22 points 3 months ago

Tech bro ennui, the societal problem.

In this essay I will explore solutions to this problem.

Solution 1. Really high marginal tax rates. Oh, this solves the problem, guess my work here is done.

[-] mountainriver@awful.systems 40 points 3 months ago

I found the article gross.

He is a suspect in a murder case, not convicted, and they spend very little space on the case. The cops say he had his fake ID, the gun and the manifesto on him. His lawyer says he has yet to see the evidence. That is all.

Then they basically go through posts he has made online and ask people he knew about them. There is a public interest in the case, but courts are supposed to adjudicate guilt. What if he is innocent? Then they have just gone through his posting history and shown it in the worst possible light.

[-] mountainriver@awful.systems 28 points 3 months ago

I started thinking about when Emma Goldman's partner Alexander Berkman tried to kill a 19th century robber baron who had sent in the Pinkertons to murder workers into ending a strike.

One can make an argument about the economic conditions creating the conditions for what the anarchists back then called "the propaganda of the deed". But that isn't where I am going. Instead, let's look at the aftermath.

From an assassination perspective, the quality of the assassination was lacking. Also, Wikipedia (my bold):

Frick was back at work within a week; Berkman was charged and found guilty of attempted murder. Berkman's actions in planning the assassination clearly indicated a premeditated intent to kill, and he was sentenced to 22 years in prison.[5] Negative publicity from the attempted assassination resulted in the collapse of the strike.[19]

In other words, today's robber barons get less sympathy than the O.G. kind. That's a bit interesting.

[-] mountainriver@awful.systems 23 points 4 months ago

At work, I've been looking through Microsoft licenses. Not the funniest thing to do, but that's why it's called work.

The new licenses that have AI functions have a suspiciously low price tag, often as an introductory price (unclear for how long, or what it will cost later). This will be relevant later.

The licenses with Office, Teams and other things my users actually use are not only confusing in how they are bundled, they have also been increasing in price. So I have been looking through and testing which cheaper licenses we can switch to without any difference for the users.

Having put quite some time into it, we crunched the numbers today and realised that compared to last year we will save... (drumroll)... approximately nothing!

But if we hadn't done all this, the costs would have increased by about 50%.
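
A minimal sketch with made-up numbers (the post doesn't give the actual figures) of how "saving approximately nothing" and "avoiding a ~50% increase" can both be true at once:

    # Hypothetical figures only, to illustrate the arithmetic; none of these numbers are from the post.
    last_year_cost = 100_000       # hypothetical license spend last year
    do_nothing_cost = 150_000      # roughly +50% if the old bundles had been kept at the new prices
    renegotiated_cost = 100_000    # spend after switching to the cheaper bundles

    saved_vs_last_year = last_year_cost - renegotiated_cost   # ~0: "we will save approximately nothing"
    increase_avoided = do_nothing_cost - renegotiated_cost    # ~50% of last year's cost

    print(f"Saved vs last year: {saved_vs_last_year}")   # 0
    print(f"Increase avoided: {increase_avoided}")        # 50000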

We are just a small corporation; maybe big ones get discounts. But I think it is a clear indication of how the AI slop is financed: by price gouging corporate customers on the traditional products.

[-] mountainriver@awful.systems 23 points 6 months ago

Repeat until a machine that can create God is built. Then it's God's problem.

But it must be a US God, otherwise China wins.


This isn't a sneer, more of a meta take. Written because I'm sitting in a waiting room and am a bit bored, so I'm writing from memory; no exact quotes will be had.

A recent thread mentioning "No Logo" in combination with a comment in one of the mega-threads that pleaded for us to be more positive about AI got me thinking. I think that in our late stage capitalism it's the consumer's duty to be relentlessly negative, until proven otherwise.

"No Logo" contained a history of capitalism and how we got from a goods based industrial capitalism to a brand based one. I would argue that "No Logo" was written in the end of a longer period that contained both of these, the period of profit driven capital allocation. Profit, as everyone remembers from basic marxism, is the surplus value the capitalist acquire through paying less for labour and resources then the goods (or services, but Marx focused on goods) are sold for. Profits build capital, allowing the capitalist to accrue more and more capital and power.

Even in Marx's time it was not only profits that built capital; new capital could also be had from banks, jump-starting the business in exchange for future profits. Thus capital was still allocated this way in the 1990s when "No Logo" was written, even if the profits had shifted from the good to the brand. In this model one could argue about ethical consumption, but that is no longer the world we live in, so I am just gonna leave it there.

In the 1990s there was also a tech bubble where capital allocation followed a different logic. The bubble logic is that capital formation is founded on hype, where capital is allocated to increase hype in hopes of selling to a bigger fool before it all collapses. The bigger the bubble grows, the more institutions are dragged in (by the greed and FOMO of their managers), like banks and pension funds. The bigger the bubble, the more it distorts the surrounding businesses and legislation. Notice how, now that the crypto bubble has burst, the obvious crimes of the perpetrators can be prosecuted.

In short, the bigger the bubble, the bigger the damage.

If under profit-driven capital allocation the consumer can deny corporations profit, then under hype-driven capital allocation the consumer can deny corporations hype. To point and laugh is damage minimisation.

[-] mountainriver@awful.systems 61 points 1 year ago

He appeared to be human, but then they counted his fingers.
