[-] gerikson@awful.systems 4 points 1 day ago

I'd wager an ounce of gold that the general attitude towards sustainability and the environment is 100% aligned among the rulers of both states.

[-] gerikson@awful.systems 4 points 2 days ago* (last edited 1 day ago)

Dubai famously doesn't have a sewage pipe system; human waste is loaded onto tanker trucks that spend hours waiting to offload it at the only sewage treatment plant available.

[-] gerikson@awful.systems 15 points 4 days ago* (last edited 3 days ago)

A rival gang of "AI" "researchers" dare to make fun of Big Yud's latest book, and the LW crowd are Not Happy

Link to the takedown: https://www.mechanize.work/blog/unfalsifiable-stories-of-doom/ (heartbreaking: the worst people you know made some good points)

When we say Y&S’s arguments are theological, we don’t just mean they sound religious. Nor are we using “theological” to simply mean “wrong”. For example, we would not call belief in a flat Earth theological. That’s because, although this belief is clearly false, it still stems from empirical observations (however misinterpreted).

What we mean is that Y&S’s methods resemble theology in both structure and approach. Their work is fundamentally untestable. They develop extensive theories about nonexistent, idealized, ultrapowerful beings. They support these theories with long chains of abstract reasoning rather than empirical observation. They rarely define their concepts precisely, opting to explain them through allegorical stories and metaphors whose meaning is ambiguous.

Their arguments, moreover, are employed in service of an eschatological conclusion. They present a stark binary choice: either we achieve alignment or face total extinction. In their view, there’s no room for partial solutions, or muddling through. The ordinary methods of dealing with technological safety, like continuous iteration and testing, are utterly unable to solve this challenge. There is a sharp line separating the “before” and “after”: once superintelligent AI is created, our doom will be decided.

LW announcement - check out the karma scores! https://www.lesswrong.com/posts/Bu3dhPxw6E8enRGMC/stephen-mcaleese-s-shortform?commentId=BkNBuHoLw5JXjftCP

Update: a LessWrong post attempts to debunk the piece with inline comments here:

https://www.lesswrong.com/posts/i6sBAT4SPCJnBPKPJ/mechanize-work-s-essay-on-unfalsifiable-doom

Leading to such hilarious howlers as

Then solving alignment could be no easier than preventing the Germans from endorsing the Nazi ideology and commiting genocide.

Ummm, pretty sure engaging in a new world war and getting their country bombed to pieces was not on most Germans' agenda. A small group of ideologues managed to seize complete control of the state, and did their very best to prevent widespread knowledge of the Holocaust from getting out. At the same time they used the power of the state to ruthlessly suppress any opposition.

rejecting Yudkowsky-Soares' arguments would require that ultrapowerful beings are either theoretically impossible (which is highly unlikely)

ohai begging the question

[-] gerikson@awful.systems 11 points 5 days ago

LOL @ promptfondlers in comments

[-] gerikson@awful.systems 8 points 6 days ago* (last edited 6 days ago)

Nice time - here's a Swedish dude who constructed an 8m parabolic dish by hand to do EME

News item in Swedish ("Built an eight-metre dish to talk via the Moon"): https://www.svt.se/nyheter/lokalt/uppsala/byggde-atta-meter-parabol-for-att-prata-via-manen

Earth-Moon-Earth: https://en.wikipedia.org/wiki/Earth%E2%80%93Moon%E2%80%93Earth_communication (bonus illustration obviously taken from a primary school science project). Edit: malus for a long passage in the second section, second paragraph, obviously originally written by a Nazi

10
submitted 1 year ago* (last edited 1 year ago) by gerikson@awful.systems to c/notawfultech@awful.systems

Current problem difficulties (up to day 23)

  1. Day 21 - Keypad Conundrum: 01h01m23s
  2. Day 17 - Chronospatial Computer: 44m39s
  3. Day 15 - Warehouse Woes: 30m00s
  4. Day 12 - Garden Groups: 17m42s
  5. Day 20 - Race Condition: 15m58s
  6. Day 14 - Restroom Redoubt: 15m48s
  7. Day 09 - Disk Fragmenter: 14m05s
  8. Day 16 - Reindeer Maze: 13m47s
  9. Day 22 - Monkey Market: 12m15s
  10. Day 13 - Claw Contraption: 11m04s
  11. Day 06 - Guard Gallivant: 08m53s
  12. Day 08 - Resonant Collinearity: 07m12s
  13. Day 11 - Plutonian Pebbles: 06m24s
  14. Day 18 - RAM Run: 05m55s
  15. Day 04 - Ceres Search: 05m41s
  16. Day 23 - LAN Party: 05m07s
  17. Day 02 - Red Nosed Reports: 04m42s
  18. Day 10 - Hoof It: 04m14s
  19. Day 07 - Bridge Repair: 03m47s
  20. Day 05 - Print Queue: 03m43s
  21. Day 03 - Mull It Over: 03m22s
  22. Day 19 - Linen Layout: 03m16s
  23. Day 01 - Historian Hysteria: 02m31s
13
submitted 1 year ago* (last edited 1 year ago) by gerikson@awful.systems to c/notawfultech@awful.systems

Problem difficulty so far (up to day 16)

  1. Day 15 - Warehouse Woes: 30m00s
  2. Day 12 - Garden Groups: 17m42s
  3. Day 14 - Restroom Redoubt: 15m48s
  4. Day 09 - Disk Fragmenter: 14m05s
  5. Day 16 - Reindeer Maze: 13m47s
  6. Day 13 - Claw Contraption: 11m04s
  7. Day 06 - Guard Gallivant: 08m53s
  8. Day 08 - Resonant Collinearity: 07m12s
  9. Day 11 - Plutonian Pebbles: 06m24s
  10. Day 04 - Ceres Search: 05m41s
  11. Day 02 - Red Nosed Reports: 04m42s
  12. Day 10 - Hoof It: 04m14s
  13. Day 07 - Bridge Repair: 03m47s
  14. Day 05 - Print Queue: 03m43s
  15. Day 03 - Mull It Over: 03m22s
  16. Day 01 - Historian Hysteria: 02m31s
7

The previous thread has fallen off the front page; feel free to use this one for discussion of the current problems.

Rules: no unmarked spoilers; use the handy dandy spoiler preset to mark discussions containing them.

[-] gerikson@awful.systems 42 points 1 year ago

I think we can all agree now that US Rationalists are basically all ex-Christians who are looking for the same thing but with the serial numbers filed off.

[-] gerikson@awful.systems 81 points 1 year ago

This is the inevitable evolution of climate change denialism - accept that it's real and happening, but claim it's too late / too expensive to do anything about it.

[-] gerikson@awful.systems 75 points 1 year ago

"tenant of white supremacy"

White Supremacy is the worst landlord.

[-] gerikson@awful.systems 46 points 1 year ago

As for the prospect that AI will enable business users to do more with fewer humans in the office or on the factory floor, the technology generates such frequent errors that users may need to add workers just to double-check the bots’ output.

And here we were worrying about being out of work...


This season's showrunners are so lazy, just re-using the same old plots and antagonists.


“It is soulless. There is no personality to it. There is no voice. Read a bunch of dialogue in an AI generated story and all the dialogue reads the same. No character personality comes through,” she said. Generated text also tends to lack a strong sense of place, she’s observed; the settings of the stories are either overly-detailed for popular locations, or too vague, because large language models can’t imagine new worlds and can only draw from existing works that have been scraped into its training data.

[-] gerikson@awful.systems 41 points 2 years ago

I wish I could say "let them fight", but this is bad for the environment and bad for productive uses of energy, so it's more a "whoever wins, we lose" situation.

[-] gerikson@awful.systems 57 points 2 years ago* (last edited 2 years ago)

Normal person: an LLM is trained on publicly available images of MRIs, most of them showing tumors, so presenting it with any MRI image will naturally generate text related to brain tumor descriptions.

Brain-addled prompt fondlers: clearly this response proves Claude is more intelligent than any doctor.

28
submitted 2 years ago* (last edited 2 years ago) by gerikson@awful.systems to c/techtakes@awful.systems

The grifters in question:

Jeremie and Edouard Harris, the CEO and CTO of Gladstone respectively, have been briefing the U.S. government on the risks of AI since 2021. The duo, who are brothers [...]

Edouard's website: https://www.eharr.is/, and on LessWrong: https://www.lesswrong.com/users/edouard-harris

Jeremie's LinkedIn: https://www.linkedin.com/in/jeremieharris/

The company website: https://www.gladstone.ai/

45
submitted 2 years ago* (last edited 2 years ago) by gerikson@awful.systems to c/techtakes@awful.systems

HN reacts to a New Yorker piece on the "obscene energy demands of AI" with exactly the same arguments coiners use when confronted with the energy cost of blockchain - the product is valuable in and of itself, demands for more energy will spur investment in energy generation, and what about the energy costs of painting oil on canvas, hmmmmmm??????

Maybe it's just my newness antennae needing calibrating, but I do feel the extreme energy requirements for what's arguably just a frivolous toy are gonna cause AI boosters big problems, especially as energy demands ramp up in the US in the warmer months. Expect the narrative to adjust to counter it.

[-] gerikson@awful.systems 88 points 2 years ago

I believe the scientific consensus is that it originated in a wet market in Wuhan.

The "lab leak theory", while not impossible, is also shorthand for a morass of conspiracy theories grounded in racist attitudes towards China. It somehow conflates that the pandemic is China's fault, if not an outright attack from China, while simultaneously downplaying any efforts to mitigate such an attack.

