We've had this thing hammering our servers. The scraper uses randomized user-agent strings (random browser/OS combinations) and comes from a number of distinct IP ranges in datacenters around the world, but all the IPs trace back to ByteDance.
Wouldn't be surprised if they're just cashing out while TikTok is still public in the US. One last desperate grab at value-add for the parent company before the shutdown.
Also a great way to burn the infrastructure for subsequent use. After this, you can guarantee every data security company is going to add the TikTok servers to their firewalls and blacklists. So the American company that tries to harvest the property is going to be tripping over these legacy bulwarks for years afterward.
This has nothing to do with TikTok other than ByteDance being a shareholder in TikTok.
Also, it doesn't respect robots.txt (the file that tells bots whether or not a given page can be accessed), unlike most AI scraping bots.
My personal website, which primarily functions as a front end to my home server, has been getting BEAT by these stupid web scrapers. Every couple of days the server is unusable because some web scraper demanded every single possible page and crashed the damn thing.
I do the same thing, and I've noticed my modem has been absolutely bricked probably 3-4 times this month. I wonder if this is why.
Not surprising that Bytedance would want to gobble up every bit of data they can as fast as possible.
Google’s mission statement was originally something about controlling the world’s data. If Google has competition, that might be a good thing?
Yeah, but we were hoping for competition that wasn't worse than google...
As for what ByteDance plans to do with a new LLM, a person familiar with the company’s ambitions said one goal has to do with the search function for TikTok.
Last week, TikTok released an update to its current search function focused on [keywords for ads], basically allowing advertisers to search in real time for words that are trending on TikTok. It allows marketers to build an ad with relevant keywords that would ostensibly help the ad show up on the screens of more users.
…
“Given the audience and the amount of use, TikTok with a search environment that is a completely biddable space with keywords and topics, that would be very interesting to a lot of people spending a ton of money with Google right now,” the person said.
A dark vision just flashed in my mind. And I am certain this is what will happen. AI-generated ads done in real time based on the latest “trending” thing. Presented to users basically as soon as the topic has the slightest amount of “trend”.
Just emitting untold amounts of CO2 to show you generated ads in near real time.
No wonder Google ex-CEO was saying fuck climate goals.
This is fine. I support archiving the Internet.
It kinda drives me crazy how normalized anti-scraping rhetoric is. There is nothing wrong with (rate-limited) scraping.
The only bots we need to worry about are the ones that POST, not the ones that GET.
It’s not fine. They are not archiving the internet.
I had to ban their user agent after very aggressive scraping that would have taken down our servers. Fuck this shitty behaviour.
I had to block ByteSpider at work because it can't even parse HTML correctly and just hammers the same page, accounting for sometimes 80% of the traffic hitting a customer's site and taking it down.
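For anyone wanting to block it the same way, here's a minimal nginx sketch. The server name and UA pattern are placeholders; ByteDance's crawler reports "Bytespider" in its User-Agent, but check your own logs for the exact string before relying on this.

```nginx
# goes in the http{} context: flag requests whose User-Agent
# contains "bytespider" (case-insensitive)
map $http_user_agent $blocked_bot {
    default        0;
    ~*bytespider   1;
}

server {
    listen 80;
    server_name example.com;  # placeholder

    # reject flagged bots before they reach the app
    if ($blocked_bot) {
        return 403;
    }

    # ... rest of your site config ...
}
```

A `map` scales better than stacking `if` blocks if you end up blocking several crawlers, since you only add one line per UA pattern.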
The big problem with AI scrapers is that, unlike Google and traditional search engines, they scrape so aggressively. Even if it's all GETs, they hit years-old content that's not cached and use up the majority of the CPU time on the web servers.
Scraping is okay; using up a whole 8-vCPU instance for days to feed AI models is not. They even actively use dozens of IPs to bypass rate limits, so they're basically DDoS'ing whoever they scrape, with no fucks given. I've been woken up by the pager way too often because of ByteSpider.
My next step is rewriting all the content with GPT-2 and serving it to bots so their models collapse.
I think a common nginx config is to just redirect malicious bots to some well-cached terabyte file. I think Hetzner hosts one, iirc.
https://github.com/iamtraction/ZOD
42kB ZIP file which decompresses into 4.5 PB.
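If you'd rather serve something like that yourself, a rough nginx sketch follows. All paths and names are placeholders, and `bomb.gz` is assumed to be a file you generate yourself (gzip of zeros compresses roughly 1000:1, so a few MB on disk can inflate to many GB for any client that honors the encoding).

```nginx
# rough sketch: hand flagged bots a pre-made gzip bomb
# e.g. generate one with:  dd if=/dev/zero bs=1M count=10240 | gzip > bomb.gz
location = /honeypot {
    gzip off;                          # don't let nginx re-compress it
    add_header Content-Encoding gzip;  # client inflates it on receipt
    default_type text/html;
    alias /var/www/bombs/bomb.gz;      # placeholder path
}
```

Pair this with a UA or IP check that rewrites bot traffic to `/honeypot`; well-behaved clients never hit it.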
Bytedance ain’t looking to build an archival tool. This is to train gen AI models.
Bullshit. This bot doesn't identify itself as a bot and doesn't rate-limit itself to anything that would be an appropriate amount. We were seeing more traffic from this thing than from all other crawlers combined.
I cannot contribute anything here, I just came to say I really really like the phrase "gobbling something up" :D
from the article:
Robots.txt is a line of code that publishers can put into a website that, while not legally binding in any way, is supposed to signal to scraper bots that they cannot take that website’s data.
i do understand that robots.txt is a very minor part of the article, but i think that’s a pretty rough explanation of robots.txt
Out of curiosity, how would you word it?
i would probably word it as something like:
Robots.txt is a document that specifies which parts of a website bots are and are not allowed to visit. While it’s not a legally binding document, it has long been common practice for bots to obey the rules listed in robots.txt.
in that description, i’m trying to keep the accessible tone that they were going for in the article (so i wrote “document” instead of file format/IETF standard), while still trying to focus on the following points:
- robots.txt is fundamentally a list of rules, not a single line of code
- robots.txt can allow bots to access certain parts of a website, it doesn’t have to ban bots entirely
- it’s not legally binding, but it is still customary for bots to follow it
i did also neglect to mention that robots.txt allows you to specify different rules for different bots, but that didn’t seem particularly relevant here.
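for illustration, a hypothetical robots.txt with per-bot rules might look like this (the bot name and paths are made up; per the spec, a bot obeys the group whose User-agent line matches it most specifically):

```
# rules just for ByteDance's crawler: banned everywhere
User-agent: Bytespider
Disallow: /

# rules for everyone else: allowed except the admin area
User-agent: *
Disallow: /admin/
```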
Another fucking CCP and PLA creation.
Every major AI company did this; let them do it too. What is there to lose here?
People like to act as if archiving has never been a thing until about a year ago at which point it was suddenly invented and is now a threat in some nebulous way.
It's not that it's a threat, it's that there's a difference between archiving for preservation and crawling other people's content for the purpose of making money off it (in a way that does not benefit the content creator).
If a foreign Dictatorship's military op wants to know every facet of your life, then you can be damn sure it's a threat.