[-] HedyL@awful.systems 7 points 1 week ago

In the past, becoming a halfway convincing scammer required a certain amount of criminal effort. Today, a certain amount of laziness is enough. I'm really glad that, at least in one place, there are now serious consequences for this.

[-] HedyL@awful.systems 7 points 2 weeks ago* (last edited 2 weeks ago)

This is just naive web crawling: Crawl a page, extract all the links, then crawl all the links and repeat.
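The loop described above can be sketched in a few lines. This is a minimal, illustrative breadth-first crawler using only the Python standard library; the politeness limits (and their absence) are the point, not any real scraper's code:

```python
# Minimal sketch of the naive crawl-extract-repeat loop described above.
# Deliberately missing: robots.txt checks, rate limiting, and any notion
# of which pages are worth fetching -- exactly the behaviour complained
# about here.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def naive_crawl(start_url, max_pages=100):
    """Fetch a page, queue every link found on it, repeat."""
    seen = {start_url}
    queue = deque([start_url])
    while queue and len(seen) < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except OSError:
            continue  # dead link, timeout, etc.: just move on
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return seen
```

Because nothing here distinguishes a cheap static page from an expensive dynamically generated one, a fleet of these will happily hammer the most costly endpoints of a site.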

It's so ridiculous: supposedly these people have access to a super-smart AI (which is supposedly going to take all our jobs soon), yet that AI can't even tell them which pages are worth scraping multiple times per second and which are not. Instead, they regularly kill their hosts like maladapted parasites. It's probably not surprising, but it's still absurd.

Edit: Of course, I strongly suspect that the scrapers don't use the AI in this context (I guess they only used it to write their crawler code, based on old Stack Overflow posts). That doesn't make it any less ridiculous, though.

[-] HedyL@awful.systems 8 points 2 weeks ago* (last edited 2 weeks ago)

Even if it's not the main topic of this article, I'm personally pleased that RationalWiki is back. And if the AI bots are now getting the error messages instead of me, then that's all the better.

Edit: But also, why do AI scrapers request pages that show differences between versions of wiki pages (or perform other similarly expensive requests)? What's the point of that anyway?
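Those revision-diff pages are exactly what a blind link-following crawler ends up requesting: every wiki history page links to countless diff/oldid combinations that are expensive to render and near-useless as content. Wikis typically ask crawlers to skip such dynamic views via robots.txt. A purely illustrative fragment, assuming a default MediaWiki-style URL layout:

```text
# Ask well-behaved crawlers to avoid expensive, low-value dynamic views
# (diffs, old revisions, edit forms), while leaving articles crawlable.
User-agent: *
Disallow: /index.php?
Disallow: /w/index.php?
Allow: /wiki/
```

Of course, this only helps with crawlers that respect robots.txt in the first place, which is precisely the complaint here.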

[-] HedyL@awful.systems 7 points 3 weeks ago

I'm not surprised that this feature (which was apparently introduced by Canva in 2019) is AI-based in some way. It was just never marketed as such, probably because in 2019, AI hadn't become a common buzzword yet. It was simply called “background remover” because that's what it does. What I find so irritating is that these guys on LinkedIn not only think this feature is new and believe it's only possible in the context of GenAI, but apparently also believe that this is basically just the final stepping stone to AI world domination.

[-] HedyL@awful.systems 7 points 4 weeks ago* (last edited 4 weeks ago)

Of course, it has long been known that some retail investors will buy shares in any company just because its name contains a string like ".com" or "blockchain". But if a company invests half a billion in an ".ai" company, shouldn't it make sure that the business model is actually AI-based?

Maybe, if we really wanted to replace something with AI, we should start with the VC investors themselves. In this case, we might not actually see any changes for the worse.

Edit: Of course, investors only bear part of the blame if fraud was involved. But the company apparently received a large part of its funding in 2023, after similar lies had already been reported as early as 2019. I find it hard to imagine that tech-savvy investors really had no chance to spot the problems earlier.

Edit No. 2: Of course, it is also conceivable that the investors didn't care at all, because they were only interested in the baseless hype they themselves had fueled. But with such large sums of money at stake, I still find it hard to imagine that there was so little due diligence.

[-] HedyL@awful.systems 10 points 4 weeks ago

Since all the book authors on the list were apparently real, I guess the "author" of this supplemental insert remembered to google their names and remove any references to fake authors invented by the AI, but couldn't be bothered to do the same for the book titles (too much work for too little money, I suppose?). And actually reading these books before putting them on a list is probably too much to ask.

It's also funny how some people seem to justify this by saying that the article is just “filler material” around ads. I don't know, but I believe most people don't buy printed newspapers in order to read nonsensical “filler material” garnished with advertising. The use of AI is a big problem in this case, but not the only one.

[-] HedyL@awful.systems 9 points 1 month ago

For me, everything increasingly points to the fact that the main “innovation” here is the circumvention of copyright regulations. With possibly very erroneous results, but who cares?

[-] HedyL@awful.systems 5 points 2 months ago

To me, those forced Google AI answers are even more disconcerting than all the rest. Sure, publishers have always resented content creators, because paying them ate into the profit margins from advertising. But Google always got most of its content (the indexed webpages) for free anyway, so what exactly was its problem?

Also, how much more energy do these forced AI answers consume, compared with regular search queries? Has anyone done the math?

Furthermore, if many people really loved that feature so much, why not make it opt-in?

At the same time, as many people already pointed out, prioritizing AI-generated answers will probably further disincentivize creators of good original content, which means there will be even less usable material to feed to AI in the future.

Is it really all about pleasing Wall Street? Or about getting people to spend more time on Google itself rather than leave for other websites? Are they really confident that they will all stay and not disappear completely at some point?

[-] HedyL@awful.systems 6 points 7 months ago

I would argue that such things do happen, the cult "Heaven's Gate" probably being one of the most notorious examples. Thankfully, however, this is not a widespread phenomenon.

[-] HedyL@awful.systems 5 points 1 year ago

After reading the review linked above, I have a strong suspicion that this comment really nails it: https://mastodon.cloud/@Jer@chirp.enworld.org/111892343079408715

It's not intended so much to convince an outsider to believe as it is to shore up the belief of a believer who is starting to doubt.

[-] HedyL@awful.systems 4 points 1 year ago

IIRC, some people pointed out at the time that this particular trade was quite sophisticated, probably orchestrated by people who knew what they were doing and who understood that the hedge fund they wrecked had taken a particularly dumb/risky position. With this kind of information, another hedge fund could have pulled that off just as easily as the people from WSB on Reddit.

[-] HedyL@awful.systems 7 points 1 year ago* (last edited 1 year ago)

Hedge fund managers (and their staff) can read Reddit, of course, and they can even participate, for example by manipulating people into betting on a stock on which they themselves hold a "leveraged long" position (or which they desperately need to dump for whatever reason). It's important to remember that those with the deepest pockets are very likely to win here, and that hedge funds (and other institutional investors) may have deep pockets in part because they invest money from everybody's pension funds. In rare cases (such as Gamestop) they may be taken by surprise, but only by a very specific attack from an angle they didn't expect, which is hard to replicate systematically IMHO.

Traditional collective action is successful mainly because actual people show up (or refuse to show up at their workplaces) in large numbers. IMHO it's impossible to replicate that via accounts on a trading app and anonymous sock puppets. This is simply not how financial markets work.

Also, if this gets more people addicted to gambling on the stock market (which obviously happened), Wall Street is going to win either way through fees etc.

IMHO, the only way to "win" here is not to play.


HedyL

joined 2 years ago