Since iOS 17 and macOS Sonoma, Safari needs a tick in “copy URLs without site tracking”
https://www.macrumors.com/how-to/remove-tracking-information-urls-safari/
I wish Premium would stop trying to add more stuff. I only want ad-free videos; just do that, rather than adding loads of crap no one wants and then charging more because of the ‘increased value’.
Oh great, another key to accidentally press when I’m in a game.
please take the time to absorb the magnitude and importance of what we’re all a part of
What’s that then? A fairly uninteresting and underperforming social media platform that hasn’t grown in years and is now falling apart. Woo, such importance.
From the NYT reporting:
Although Mr. Musk acknowledged that an extended boycott could bankrupt X, he suggested that the public would blame the brands rather than him for its collapse.
lol, yeah, sure.
It’s known that performance will be poor, but if it were that bad, the tons of YouTubers doing preview coverage would have reported it.
Actually, GDPR says yes. “Legitimate Interest” covers things like security, anti-fraud measures, etc.
For anyone wanting a deep dive into how much of a scumbag this guy and his brother are, I can highly recommend the recent series on YouTube by ‘Common Sense Skeptic’: https://youtube.com/@commonsenseskeptic?si=AqKSiA55JOeTJeRo
I’d recommend anyone interested in the Voyager program check out “It’s Quieter in the Twilight”, a film about the people involved in the project and how they’ve dedicated their lives to keeping it going.
Yeah. There must be tons of places that would choose Slack, or another alternative, over Teams if they weren’t getting Teams bundled into a package they were already paying for.
Plus all the energy used in developing the feature in the first place. What a crock.
My CDN bill recently went from about $5 a month to over $200. It turned out to be TikTok’s spider relentlessly scraping the same content over and over again.
It was ignoring robots.txt, so in the end I just had to ban their user agent in the CDN config.
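For anyone hitting the same thing: most CDNs let you reject requests by user agent at the edge, before they touch the origin or run up bandwidth. Here’s a minimal sketch of the idea as a Cloudflare Workers-style handler in TypeScript; Cloudflare and the “Bytespider” agent string are assumptions on my part (the comment doesn’t name the CDN or the exact UA, though Bytespider is the token ByteDance’s crawler commonly uses), so check your access logs for the real string first.

```ts
// Sketch: block a misbehaving crawler by User-Agent at the CDN edge.
// Written as a Cloudflare Workers ES-module fetch handler.

// Agent substrings to reject. "Bytespider" is an assumption; confirm
// against your own access logs before deploying.
const BLOCKED_AGENTS = ["Bytespider"];

export default {
  async fetch(request: Request): Promise<Response> {
    const ua = request.headers.get("User-Agent") ?? "";

    // Reject matching crawlers here, before the request reaches the
    // origin or counts against cached-bandwidth billing.
    if (BLOCKED_AGENTS.some((agent) => ua.includes(agent))) {
      return new Response("Forbidden", { status: 403 });
    }

    // Everything else passes through to the origin/cache as normal.
    return fetch(request);
  },
};
```

The same effect can be had with a WAF rule or an equivalent user-agent condition in other CDNs’ configs; the important part is doing it at the edge, since a crawler that ignores robots.txt will keep hammering you regardless of what the origin says.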