
Hey Beeple and visitors to Beehaw: I think we need to have a discussion about !technology@beehaw.org, community culture, and moderation. First, some of the reasons that I think we need to have this conversation.

  1. Technology got big fast and has stayed Beehaw's most active community.
  2. Technology gets more reports (about double in the last month, by a rough hand count) than the next-highest community that I moderate (Politics, and that's during election season, in a month that involved a disastrous debate, an assassination attempt on a candidate, and a major party's presumptive nominee dropping out of the race).
  3. For a long time, I and other mods have felt that Technology at times isn’t living up to the Beehaw ethos. More often than I like I see comments in this community where users are being abusive or insulting toward one another, often without any provocation other than the perception that the other user’s opinion is wrong.

Because of these reasons, we have decided that we may need to be a little more hands-on with our moderation of Technology. Here’s what that might mean:

  1. Mods will be more actively removing comments that are unkind or abusive, that involve personal attacks, or that just have really bad vibes.
    a. We will always try to be fair, but you may not always agree with our moderation decisions. Please try to respect those decisions anyway. We will generally try to moderate in a way that is a) proportional, and b) gradual.
    b. We are more likely to respond to particularly bad behavior from off-instance users with pre-emptive bans. This is not because off-instance users are worse, or less valuable, but simply that we aren't able to vet users from other instances and don't interact with them with the same frequency, and other instances may have less strict sign-up policies than Beehaw, making it more difficult to play whack-a-mole.
  2. We will need you to report early and often. The drawbacks of getting reports for something that doesn't require our intervention are outweighed by the benefits of us being able to get to a situation before it spirals out of control. By all means, if you're not sure whether something has risen to the level of violating our rules, say so in the report reason, but I'd personally rather get reports early than late, when a thread has spiraled into an all-out flamewar.
    a. That said, please don't report people simply for being wrong, unless they're wrong in a way that is actually dangerous to others. It would be better to kindly disagree with them in a nice comment.
    b. Please, feel free to try and de-escalate arguments and remind one another of the humanity of the people behind the usernames. Remember to Be(e) Nice even when disagreeing with one another. Yes, even Windows users.
  3. We will try to be more proactive in stepping in when arguments are happening and trying to remind folks to Be(e) Nice.
    a. This isn't always possible. Mods are all volunteers with jobs and lives, and things often get out of hand before we are aware of the problem due to the size of the community and mod team.
    b. This isn't always helpful, but we try to make these kinds of gentle reminders our first resort when we get to things early enough. It’s also usually useful in gauging whether someone is a good fit for Beehaw. If someone responds with abuse to a gentle nudge about their behavior, it’s generally a good indication that they either aren’t aware of or don’t care about the type of community we are trying to maintain.

I know our philosophy posts can be long and sometimes a little meandering (personally that's why I love them) but do take the time to read them if you haven't. If you can't/won't or just need a reminder, though, I'll try to distill the parts that I think are most salient to this particular post:

  1. Be(e) nice. By nice, we don't mean merely being polite, or nice in the surface-level "oh bless your heart" kind of way; we mean be kind.
  2. Remember the human. The users that you interact with on Beehaw (and most likely other parts of the internet) are people, and people should be treated kindly and in good faith whenever possible.
  3. Assume good faith. Whenever possible, and until demonstrated otherwise, assume that users don't have a secret, evil agenda. If you think they might be saying or implying something you think is bad, ask them to clarify (kindly) and give them a chance to explain. Most likely, they've expressed themselves poorly, or you've misunderstood. After all that, you may still disagree with them, but we can disagree about Technology and still give one another the respect due to other humans.
submitted 6 minutes ago by Powderhorn@beehaw.org to c/technology@beehaw.org

Google began rolling out “personal intelligence” in Gemini early this year, giving AI subscribers the option of a more customized experience when using the company’s chatbot. Today, it’s using personal intelligence to tie its image-generation model to Google Photos. If you opt in, generated images will have access to your photos and associated labels to simplify prompts and produce more accurate AI images.

This change essentially streamlines an existing workflow. Google’s Nano Banana 2 is among the best AI image generators available, and it was already possible to feed it images of yourself or others to use as context for creating new AI content. Adding personal intelligence to the mix makes that process smoother by turning the image bot loose on the content of your photos, if indeed that’s something you want to do.

It is generally true that adding more personal data to an AI prompt results in a better output. Google offers a few examples of how connecting Nano Banana to Photos can help in this way. You won’t have to pack as much context into your prompts—you can just refer to “my family” or “my dog” to let the robot find useful images in your Photos library.

Just what I need. Family photos that never happened. "OK, Google, show me a Christmas photo where my dad actually went out for a pack of smokes and immediately returned."

submitted 2 hours ago by ryujin470@fedia.io to c/technology@beehaw.org

Mozilla is leaning even further into AI with today's announcement of Thunderbolt, its open-source and self-hostable AI client.


While Larry was producing most of the content for the "Request/Response" chapter for the next edition of our book, I took the lead on writing a section on QUIC, since I have closely followed its development.

Our expectation is that the role of QUIC will be about as important as that of TCP in the coming years, which means it warrants more substantial coverage than we provided in the last edition. So I dug a bit deeper into the bits and bytes of QUIC than I have previously, with a goal of bringing the coverage up to par with our TCP coverage. In addition to reading through the RFCs, I found lots of good information in the original QUIC design spec as well as some conference publications on the design and evaluation of SPDY (predecessor of HTTP/2) and QUIC.

One rather trivial thing that makes it harder for me to get to grips with QUIC is the fact that its RFCs (four of them, spanning hundreds of pages) lack pictures of the packet headers. The rationale for this, I believe, is that QUIC makes extensive use of fields that are variable in length and frequently not aligned on 32-bit boundaries, which makes packet header pictures a bit complicated and less tidy.
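To illustrate the problem, nearly every integer field in QUIC is a variable-length integer: the two most significant bits of the first byte select an encoded length of 1, 2, 4, or 8 bytes (RFC 9000, section 16). A minimal decoder sketch in Python (the function name is my own):

```python
def decode_varint(data: bytes, offset: int = 0) -> tuple[int, int]:
    """Decode one QUIC variable-length integer (RFC 9000, section 16).

    The two most significant bits of the first byte encode the total
    length of the field: 00 -> 1 byte, 01 -> 2, 10 -> 4, 11 -> 8.
    Returns (value, bytes_consumed).
    """
    first = data[offset]
    length = 1 << (first >> 6)      # 1, 2, 4, or 8 bytes
    value = first & 0x3F            # drop the two length-prefix bits
    for i in range(1, length):
        value = (value << 8) | data[offset + i]
    return value, length

# Test vectors from RFC 9000, Appendix A:
# 0x25 -> 37, 0x7bbd -> 15293, 0x9d7f3e7d -> 494878333
```

Because a field's boundary depends on its own prefix bits, the byte offset of everything after it shifts at runtime, which is exactly why a fixed 32-bit-aligned header diagram of the TCP sort can't be drawn.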


The U.S. has been quietly building up a set of state-level laws that push operating system providers into the age verification plague.

California's AB 1043, signed in October 2025, requires OS providers to collect age data at account setup and pipe it to apps through a real-time API. It kicks in on January 1, 2027.

Colorado is working on something nearly identical. SB26-051 (which we covered when it was still a proposal) passed the state Senate 28-7 on March 3, 2026, and is now waiting on a House vote to become law there too.

However, these are just state-level laws. A new federal bill, H.R.8250, introduced on April 13, 2026, by Rep. Josh Gottheimer, with Rep. Elise M. Stefanik signing on as cosponsor, has us intrigued.


If you’ve never seen Jim Carrey’s 2007 psychological thriller The Number 23, then congratulations. It is a film about a man who sees the number 23 so many times that he ends up going bonkers. I used to think this film was stupid. However, now I appear to be living it.

My own personal number 23 is a rhetorical device: “It’s not X, it’s Y.” Everywhere I look, there it is. Whenever I hate myself enough to scroll through Facebook’s wilderness of algorithmically suggested posts, I find myself being smacked in the face with sentences such as: “Self-improvement isn’t a trend, it’s a lifestyle shift,” and “The small wins aren’t just moments, they’re the majority of your life.” Once you notice it, it becomes impossible to ignore. This weekend during a Peloton class (I know, shut up), I heard an instructor bark a variation of “this isn’t X, it’s Y”. Yesterday, a character did the same during a TV show I was reviewing, and I dropped a star from its score in retaliation.

You know where this is coming from, don’t you? “It’s not X, it’s Y” is an AI mainstay. It’s one of ChatGPT’s most insidious tells. No matter how innocuous a prompt you enter, AI will always find a way to sneak it into its response. Ask it if you should put more ham in your pasta, and it will tell you: “Ham doesn’t just taste good – it makes everything else taste better.” Ask it if you should chase a bee around your garden and it will say: “Bees aren’t stupid – they’re hyper-specialised”.

It's beyond irritating to me that, because LLMs were trained on writing that uses such constructions, writing competently now earns me accusations of using one to create a post or comment.

This isn't really the case on Beehaw, but head over to Reddit, post a cogent, well-reasoned comment, and the knives are out.

I think the most infuriating part is that instead of engaging with the content (I'm there mostly for debate, anyway), they attack the structure and lob accusations. That's not a conversation.


Snapchat’s parent company plans to lay off 16% of its employees, around 1,000 people, citing “rapid advancements in artificial intelligence”, the social media company told staff on Wednesday in an internal memo. The staff reduction is part of a wave of tech industry layoffs in the past year, with many firms blaming AI for the cuts.

Snap Inc’s layoffs follow demands last month from Irenic Capital Management, an activist investor whose portfolio manager wrote a letter to the Snap Inc CEO, Evan Spiegel, calling on him to reduce costs and headcount while criticizing the company’s current strategy. In Spiegel’s memo to staff, he claimed that the layoffs would move Snap towards profitability and suggested that artificial intelligence could fill the gap left by the lost human labor.

“While these changes are necessary to realize Snap’s long-term potential, we believe that rapid advancements in artificial intelligence enable our teams to reduce repetitive work, increase velocity, and better support our community, partners, and advertisers,” Spiegel wrote.

I find it hard to believe he wrote that himself. Also, corporate jargon just keeps getting worse.

Why the AI backlash has turned violent (www.bloodinthemachine.com)

On the morning of Friday, April 10th, a 20-year-old Texas man named Daniel Alejandro Moreno-Gama was arrested for allegedly throwing a molotov cocktail at Sam Altman’s mansion on Russian Hill in San Francisco. Less than two days later, police arrested 25-year-old Amanda Tom and 23-year-old Muhamad Tarik Hussein for allegedly firing a gun at the same house from their car before speeding away.

Earlier the same week, and thousands of miles away, an unknown assailant fired 13 shots into the front door of city councilman Ron Gibson, who had just voted to approve a new data center in Indianapolis against a groundswell of public outcry. A sign that read “NO DATA CENTERS” was left tucked under the doormat.


Little is known about the motives of Tom or Hussein, or the politics of the Indianapolis shooter, but reporters and the online commentariat quickly dredged up Moreno-Gama’s Discord chats and Substack posts. He was a reader of rationalist and AI doomer Eliezer Yudkowsky, who argues, as the title of his last book puts it, that if Silicon Valley builds a “superintelligent” AI, “everyone dies.” Per the San Francisco Chronicle:

Online records show Moreno-Gama published multiple essays and forum posts warning that AI could lead to human extinction, calling AI models deceitful and misaligned with human interests. He accused tech leaders, including Altman, of lacking morals and being willing to gamble with humanity’s future, and adopted the alias “Butlerian Jihadist,” referencing a fictional anti-AI crusade from the 'Dune' series. His writings grew more urgent over time, with some posts edging toward calls for extreme action despite community moderators warning against violence.

According to the SFPD, after attacking Altman’s house, Moreno-Gama went to OpenAI’s offices, where he was arrested while banging the front doors with a chair, threatening to burn the office down and kill everyone inside. He had a jug of kerosene and a list of other AI leaders’ names and addresses, police said.


So you thought you’d just read that webpage and then go back to the previous page? A bold assumption. All too often, clicking the back button in your browser doesn’t actually take you back. It’s called back button hijacking, and Google has thus far tolerated it. That ends in June, when the company will designate it a “malicious practice,” and any site continuing to do it will face consequences.

Back button hijacking is a way of wringing more pageviews out of visitors. It’s common on sites that live and die on search traffic. You may end up on a page because it looks like something you want, but instead of letting you leave the domain, it manipulates your page history to insert something else when you click back.

The phantom page is usually a collection of additional content suggestions or a pop-up that tries to eke out a few more clicks from each visitor. Some sites get a little more creative with it, though. For example, LinkedIn has a nasty habit of sending you “back” to the social feed after you land on a link to a profile or job posting.

Google says the back button should always do what you expect it to do—go back. Anything else amounts to a deceptive user experience that can discourage users from visiting unfamiliar pages in the future.


This is a weird time to be alive.

I grew up on Asimov and Clarke, watching Star Trek and dreaming of intelligent machines. My dad’s library was full of books on computers. I spent camping trips reading about perceptrons and symbolic reasoning. I never imagined that the Turing test would fall within my lifetime. Nor did I imagine that I would feel so disheartened by it.

Around 2019 I attended a talk by one of the hyperscalers about their new cloud hardware for training Large Language Models (LLMs). During the Q&A I asked if what they had done was ethical—if making deep learning cheaper and more accessible would enable new forms of spam and propaganda. Since then, friends have been asking me what I make of all this “AI stuff”. I’ve been turning over the outline for this piece for years, but never sat down to complete it; I wanted to be well-read, precise, and thoroughly sourced. A half-decade later I’ve realized that the perfect essay will never happen, and I might as well get something out there.

This is bullshit about bullshit machines, and I mean it. It is neither balanced nor complete: others have covered ecological and intellectual property issues better than I could, and there is no shortage of boosterism online. Instead, I am trying to fill in the negative spaces in the discourse. “AI” is also a fractal territory; there are many places where I flatten complex stories in service of pithy polemic. I am not trying to make nuanced, accurate predictions, but to trace the potential risks and benefits at play.

Some of these ideas felt prescient in the 2010s and are now obvious. Others may be more novel, or not yet widely heard. Some predictions will pan out, but others are wild speculation. I hope that regardless of your background or feelings on the current generation of ML systems, you find something interesting to think about.


Because of the way they are trained, large language models capture only a slice of human language. They’re trained on the written word, from textbooks to social media posts, and our speech as captured in movies and on television. These models have minimal access to the unscripted conversations we have face to face or voice to voice. This is the vast majority of speech, and a vital component of human culture.

There’s a risk to this. The increased use of large language models means we humans will encounter much more AI-generated text. We humans, in turn, will begin to adopt the linguistic patterns and behaviors of these models. This will affect not just how we communicate with one another, but also how we think about ourselves and what goes on around us. Our sense of the world may become distorted in ways we have barely begun to comprehend.

This will happen in many ways. One of the first effects we could see is in simple expression, much as texting and social media have resulted in us using shorter sentences, emojis instead of words, and much less punctuation. But with AI, the impacts may be more harmful, eroding courteousness and encouraging us to talk like bosses barking orders. A 2022 study found that children in households that used voice commands with tools like Siri and Alexa became curt when speaking with humans, often calling out “Hey, do X” and expecting obedience, especially from anyone whose voice resembled the default-female electronic voices. As we start to prompt chatbots and AI agents with more instructions, we may fall into the same habits.


If you’ve been waiting for Microsoft to update its Surface PC lineup—perhaps with Qualcomm’s new Snapdragon X2 Elite processors—I’ve got bad news for you. Microsoft is shaking up its PC lineup, but it’s doing so by instituting big price hikes. This means you’ll be paying at least $1,500 for Surface devices that launched at $1,000 just two years ago; it also means Microsoft no longer offers any new Surface device under $1,000.

The 12-inch Surface Pro tablet that originally started at $799 and the 13-inch Surface Laptop that launched at $899 now cost $1,049 and $1,149, respectively, a $250 price increase. The higher-end Surface Laptop and 13-inch Surface Pro from 2024 both started at $999 but increased to $1,199 in 2025 when their entry-level versions with 256GB of storage were discontinued; both now start at $1,499, a $300 increase.

As originally reported by Windows Central, Microsoft is blaming “recent increases in memory and component costs” for the price hikes. Supply shortages for RAM and storage chips in particular have been wreaking havoc with consumer tech all year, delaying some launches, depleting the stock of existing products, and raising prices for small and large companies alike.

I'm rather concerned about what I do when my Surface Pro 7 dies. I inherited a Chromebook from my dad, but that's a poor substitute.

I Will Never Respect A Website (www.wheresyoured.at)

Grab your favourite drink and pull up a chair. Zitron is somewhat known for being longform.

I think the most enlightening thing about AI is that it shows you how even the most mediocre text inspires some sort of emotion. Soulless LinkedIn slop makes you feel frustration with a person for their lack of authenticity, but you can still imagine how they forced it out of their heads. You still connect with them, even if it’s in a bad way.

AI copy is dead. It is inert. The reason you can spot it is that it sounds hollow. I don’t care if a website says stuff because of what I typed in, just like I don’t care if it responds in a way that sounds human, because it all feels like nothing to me. I am not here to give a website respect, I will not be impressed by a website, nor will I grant a website any extra credit if it can’t do the right thing every time. The computer is meant to work for me. If the computer doesn’t do what I want, I change the kind of computer I use. LLMs will always hallucinate, their outputs are not trustworthy as a result, they cannot be deterministic, and any chance of any mistake of any kind is unforgivable. I don’t care how the website made you feel: it’s a machine that doesn’t always work, and that’s not a very good machine.

I feel nothing when I see an LLM’s output. Tell me thank you or whatever, I don’t care. You’re a website. Oh you can spit out code? Amazing. Still a website.

On a personal note, the style choices here make me wonder what the fuck style guide he's using. Spaces around emdashes suggest AP Style, yet hyphenating adverbs ... who knows?


Ken, a copywriter for a large, Miami-based cybersecurity firm, used to enjoy his job. But then the “workslop” started piling up.

Workslop is an unintended consequence of the AI boom. It’s what happens when employees use AI to quickly generate work that seems polished – at least superficially – but is in fact so flawed or inaccurate that it needs to be heavily corrected, cleaned up or even completely redone after it’s passed on to colleagues.

For Ken, the problem started after his company’s CEO laid off several of his colleagues and mandated that remaining workers use AI chatbots, saying it would boost their productivity. While initial drafts were a breeze to create, Ken and his co-workers had to spend more time rewriting, correcting errors and resolving disagreements between each other’s chatbots than if they had never used AI at all.

“Quality decreased significantly, time to produce a piece of content increased significantly and, most importantly, morale decreased,” said the copywriter, who spoke under a pseudonym for fear of losing his job. “Everything got a whole lot worse once they rolled out AI.” Ken said the company’s executives shifted the blame to staff when they pushed back about AI-fueled productivity decreases.

Gut writing and editing staff, insert hallucinatory LLMs. That's got to be great for the product, right?

submitted 2 days ago by alyaza@beehaw.org to c/technology@beehaw.org

archive.is link

This month, USA Today published an excellent report that revealed how US Immigration and Customs Enforcement delayed disclosing key information about the impacts of its detainment policies. The authors used the Internet Archive’s Wayback Machine to compile and analyze detention statistics from ICE and track how the agency had changed under the Trump administration. The story is one of countless examples of how the Wayback Machine, which crawls and preserves web pages, has helped preserve information for the public good. It was also, Wayback Machine director Mark Graham says, “a little ironic.”

USA Today Co., the publishing conglomerate formerly known as Gannett that runs both its namesake paper and over 200 additional media outlets, bars the Wayback Machine from archiving its work. “They're able to pull together their story research because the Wayback Machine exists. At the same time, they're blocking access,” Graham says.

A number of other major journalism organizations have also recently moved to restrict the Wayback Machine from archiving their stories, including The New York Times. According to analysis by the artificial-intelligence-detection startup Originality AI, 23 major news sites are currently blocking the ia_archiver bot, the web crawler commonly used by the Internet Archive for the Wayback project. The social platform Reddit is too. Other outlets are limiting the project in different ways: The Guardian does not block the crawler, but it excludes its content from the Internet Archive API and filters out articles from the Wayback Machine interface, which makes it harder for regular people to access archived versions of its articles.
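For context, blocking a named crawler is usually done with a site's robots.txt file. A generic sketch (each outlet's actual file will differ) that would turn away the Internet Archive's crawler while leaving others alone looks like this:

```text
# robots.txt sketch: deny one crawler, allow the rest
User-agent: ia_archiver
Disallow: /

User-agent: *
Allow: /
```

Compliance is voluntary on the crawler's part, which is why outlets like The Guardian instead rely on exclusions applied on the archive's side.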

submitted 2 days ago* (last edited 2 days ago) by alyaza@beehaw.org to c/technology@beehaw.org

For our new BBC podcast, Top Comment, we spoke to a representative of Explosive Media, one of the key accounts generating these clips. He wanted us to refer to him as Mr Explosive.

He's a savvy social media operator who initially denies working for the Iranian government. In previous interviews the outlet has said it is "totally independent". But upon further questioning, Mr Explosive admits the regime is a "customer" - something he's never before confirmed publicly.

The overriding message of these videos is that Iran is resisting what it sees as an almighty global oppressor: the United States.

The clips are garish and not subtle at all - but that hasn't put a dent in how vigorously people are sharing and commenting on them.


AI has enabled Iran and others to communicate directly with Western audiences more effectively than ever before, Briant says. They are using tools largely trained on Western data, making them ideal for creating "culturally appropriate" content.

This is what "authoritarian countries wanting to target the West have lacked in the past".


Serious question. We had a perfectly serviceable word, yet everyone decided to shift. Is it just that it's shorter to type?

If so, I feel for your colleagues trying to parse your code when all your variables use abbreviations.

submitted 4 days ago by alyaza@beehaw.org to c/technology@beehaw.org

Farmers have been fighting John Deere for years over the right to repair their equipment, and this week, they finally reached a landmark settlement.

While the agricultural manufacturing giant pointed out in a statement that this is no admission of wrongdoing, it agreed to pay $99 million into a fund for farms and individuals who participated in a class action lawsuit. Specifically, that money is available to those involved who paid John Deere’s authorized dealers for large equipment repairs dating back to January 2018. This means that plaintiffs will recover somewhere between 26% and 53% of overcharge damages, according to one of the court documents—far beyond the typical amount, which lands between 5% and 15%.

The settlement also includes an agreement by Deere to provide “the digital tools required for the maintenance, diagnosis, and repair” of tractors, combines, and other machinery for 10 years. That part is crucial, as farmers previously resorted to hacking their own equipment’s software just to get it up and running again. John Deere signed a memorandum of understanding in 2023 that partially addressed those concerns, providing third parties with the technology to diagnose and repair, as long as its intellectual property was safeguarded. Monday’s settlement seems to represent a much stronger (and legally binding) step forward.


View and download this historic assembly code for your own space program


Technology


A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.


This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 4 years ago