[-] HedyL@awful.systems 7 points 1 week ago

In the past, people had to bring a degree of criminal effort to the table to become halfway convincing scammers. Today, a certain amount of laziness is enough. I'm really glad that, in at least one place, there are now serious consequences for this.

[-] HedyL@awful.systems 7 points 1 week ago* (last edited 1 week ago)

This is just naive web crawling: crawl a page, extract all the links, then crawl each of those links and repeat.
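(Purely as an illustration, and obviously not the scrapers' actual code: in hypothetical Python, that kind of crawler boils down to roughly the sketch below, with no robots.txt check, no per-host rate limiting and no notion of which URLs are worth fetching at all.)

```python
# Minimal sketch of a naive breadth-first crawler (illustrative only).
# Nothing here respects robots.txt, rate-limits per host, or asks
# whether a URL is worth fetching, which is exactly how you hammer a small wiki.
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup


def naive_crawl(start_url, max_pages=10_000):
    queue = deque([start_url])
    seen = {start_url}
    while queue and len(seen) <= max_pages:
        url = queue.popleft()
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue  # ignore errors and keep hammering
        for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            link = urljoin(url, a["href"])
            # Follows *every* link it sees: diffs, histories, edit forms...
            if link not in seen:
                seen.add(link)
                queue.append(link)
```

Point something like this at a MediaWiki installation and it will happily queue every “?diff=” and “?action=history” link it finds, at a rate no human reader would ever come close to.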

It's so ridiculous: these people supposedly have access to a super-smart AI (which is supposedly going to take all our jobs soon), yet that AI can't even tell them which pages are worth scraping multiple times per second and which are not. Instead, they regularly appear to kill their hosts like maladapted parasites. It's probably not surprising, but still absurd.

Edit: Of course, I strongly suspect that the scrapers don't use the AI in this context (I guess they only used it to write their crawler code based on old Stack Overflow posts). That doesn't make it any less ridiculous, though.

[-] HedyL@awful.systems 8 points 1 week ago* (last edited 1 week ago)

Even if it's not the main topic of this article, I'm personally pleased that RationalWiki is back. And if the AI bots are now getting the error messages instead of me, then that's all the better.

Edit: But also: why do AI scrapers request pages that show the differences between revisions of wiki pages (or make other similarly expensive requests)? What's the point of that anyway?

[-] HedyL@awful.systems 10 points 2 weeks ago

Under the YouTube video, somebody just commented that they believe the majority of people will accept AI slop in the end anyway, because that's just how people are. Maybe they're right, but it seems to me that the most privileged people are sometimes the ones most impressed by form over substance, and this appears to be the case with AI at the moment. I don't think this necessarily applies to the population as a whole, though. Whether oligopolistic providers such as Google will eventually leave people no other choice by making reliable search results almost unreachable is another matter.

[-] HedyL@awful.systems 7 points 2 weeks ago

I'm not surprised that this feature (which was apparently introduced by Canva in 2019) is AI-based in some way. It was just never marketed as such, probably because AI hadn't become a common buzzword yet in 2019. It was simply called “background remover” because that's what it does. What I find so irritating is that these guys on LinkedIn not only think this feature is new and only possible thanks to GenAI, but apparently also see it as the final stepping stone to AI world domination.

[-] HedyL@awful.systems 7 points 3 weeks ago* (last edited 3 weeks ago)

Of course, it has long been known that some private investors would buy shares in any company just because its name contained something like “.com” or “blockchain”. However, if an investor puts half a billion into an “.ai” company, shouldn't it make sure that the business model is actually AI-based?

Maybe, if we really wanted to replace something with AI, we should start with the VC investors themselves. In that case, we might not actually see any changes for the worse.

Edit: Of course, investors only bear part of the blame if fraud was involved. But the company apparently received a large part of its funding in 2023, following reports of similar lies as early as 2019. I find it hard to imagine that tech-savvy investors really wouldn't have had a chance to spot the problems earlier.

Edit No. 2: Of course, it is also conceivable that the investors didn't care at all because they were only interested in the baseless hype, which they themselves fueled. But with such large sums of money at stake, I still find it hard to imagine that there was apparently so little due diligence.

[-] HedyL@awful.systems 10 points 3 weeks ago

As all the authors on the list were apparently real, I guess the “author” of this supplemental insert remembered to google their names and to remove any fake authors the AI had made up, but couldn't be bothered to do the same with the book titles (too much work for too little money, I suppose?). And for an author to actually read these books before putting them on a list is probably too much to ask for...

It's also funny how some people seem to justify this by saying that the article is just “filler material” around ads. I don't know, but I believe most people don't buy printed newspapers in order to read nonsensical “filler material” garnished with advertising. The use of AI is a big problem in this case, but not the only one.

[-] HedyL@awful.systems 9 points 1 month ago

To me, everything increasingly suggests that the main “innovation” here is the circumvention of copyright rules. With possibly highly error-prone results, but who cares?

[-] HedyL@awful.systems 12 points 2 months ago

FWIW, years ago, some people who worked for a political think tank approached me for expert input. They subsequently published a report that cited many of the sources I had mentioned, but their recommendations in the report were exactly the opposite of what the cited sources said (and what I had told them myself). As far as I know, there was no GenAI at the time. I think these people were simply betting that no one would check the sources.

This is not meant to defend the use of AI; on the contrary, I think it shows quite well what sort of people would use such tools.

[-] HedyL@awful.systems 11 points 9 months ago

From the original article:

Crivello told TechCrunch that out of millions of responses, Lindy only Rickrolled customers twice.

Yes, but how many customers received other, similarly “useful” answers to their questions?

[-] HedyL@awful.systems 7 points 1 year ago* (last edited 1 year ago)

Hedge fund managers (and their staff) can read Reddit, of course, and they can even participate and, for example, manipulate people into betting on a stock they themselves hold a “leveraged long” position in (or desperately need to dump for whatever reason). It's important to remember that those with the deepest pockets are very likely to win here, and also that hedge funds (and other institutional investors) might have deep pockets in part because they are investing money from everybody's pension funds. In rare cases (such as GameStop) they may be taken by surprise, but only when there is a very specific attack from an angle they didn't expect, which is hard to replicate systematically IMHO.

Traditional collective action is successful mainly because actual people show up (or refuse to show up at their workplaces) in large numbers. IMHO it's impossible to replicate that via accounts on a trading app and anonymous sock puppets. This is simply not how financial markets work.

Also, if this gets more people addicted to gambling on the stock market (which obviously happened), Wall Street is going to win either way through fees etc.

IMHO, the only way to "win" here is not to play.

[-] HedyL@awful.systems 12 points 1 year ago

I vividly remember how, in the days of GameStop, even normally reasonable people on the left bought into that “sticking it to Wall Street” narrative. Sadly, I'm not surprised at how things turned out.
