
Hey Beeple and visitors to Beehaw: I think we need to have a discussion about !technology@beehaw.org, community culture, and moderation. First, some of the reasons that I think we need to have this conversation.

  1. Technology got big fast and has stayed Beehaw's most active community.
  2. Technology gets more reports (roughly double in the last month, by a rough hand count) than the next most-reported community that I moderate (Politics, and that during election season, in a month that involved a disastrous debate, an assassination attempt on a candidate, and a major party's presumptive nominee dropping out of the race).
  3. For a long time, I and other mods have felt that Technology at times isn't living up to the Beehaw ethos. More often than I'd like, I see comments in this community where users are abusive or insulting toward one another, often without any provocation other than the perception that the other user's opinion is wrong.

Because of these reasons, we have decided that we may need to be a little more hands-on with our moderation of Technology. Here’s what that might mean:

  1. Mods will be more actively removing comments that are unkind or abusive, that involve personal attacks, or that just have really bad vibes.
    a. We will always try to be fair, but you may not always agree with our moderation decisions. Please try to respect those decisions anyway. We will generally try to moderate in a way that is a) proportional, and b) gradual.
    b. We are more likely to respond to particularly bad behavior from off-instance users with pre-emptive bans. This is not because off-instance users are worse, or less valuable, but simply that we aren't able to vet users from other instances and don't interact with them with the same frequency, and other instances may have less strict sign-up policies than Beehaw, making it more difficult to play whack-a-mole.
  2. We will need you to report early and often. The drawbacks of getting reports for something that doesn't require our intervention are outweighed by the benefits of us being able to get to a situation before it spirals out of control. If you're not sure whether something rises to the level of violating our rules, by all means say so in the report reason; I'd personally rather get reports early than late, when a thread has already spiraled into an all-out flamewar.
    a. That said, please don't report people simply for being wrong, unless they are wrong in a way that is actually dangerous to others. It would be better to kindly disagree with them in a nice comment.
    b. Please, feel free to try and de-escalate arguments and remind one another of the humanity of the people behind the usernames. Remember to Be(e) Nice even when disagreeing with one another. Yes, even Windows users.
  3. We will try to be more proactive in stepping in when arguments are happening and trying to remind folks to Be(e) Nice.
    a. This isn't always possible. Mods are all volunteers with jobs and lives, and things often get out of hand before we are aware of the problem due to the size of the community and mod team.
    b. This isn't always helpful, but we try to make these kinds of gentle reminders our first resort when we get to things early enough. It’s also usually useful in gauging whether someone is a good fit for Beehaw. If someone responds with abuse to a gentle nudge about their behavior, it’s generally a good indication that they either aren’t aware of or don’t care about the type of community we are trying to maintain.

I know our philosophy posts can be long and sometimes a little meandering (personally that's why I love them) but do take the time to read them if you haven't. If you can't/won't or just need a reminder, though, I'll try to distill the parts that I think are most salient to this particular post:

  1. Be(e) nice. By nice, we don't mean merely polite, or nice in the surface-level "oh bless your heart" way; we mean kind.
  2. Remember the human. The users that you interact with on Beehaw (and most likely other parts of the internet) are people, and people should be treated kindly and in good-faith whenever possible.
  3. Assume good faith. Whenever possible, and until demonstrated otherwise, assume that users don't have a secret, evil agenda. If you think they might be saying or implying something bad, ask them (kindly) to clarify and give them a chance to explain. Most likely, they've expressed themselves poorly, or you've misunderstood. Even after all of that, you may still disagree with them, but we can disagree about Technology and still give one another the respect due to fellow humans.
submitted 4 hours ago by ryujin470@fedia.io to c/technology@beehaw.org
submitted 1 day ago* (last edited 1 day ago) by spit_evil_olive_tips@beehaw.org to c/technology@beehaw.org

also in video form (11m12s) if you're into that kind of thing: https://www.youtube.com/watch?v=cxZgILm95BU

submitted 2 days ago by alyaza@beehaw.org to c/technology@beehaw.org
2025 Was My First Year on the Internet (www.mozillafoundation.org)
submitted 2 days ago by alyaza@beehaw.org to c/technology@beehaw.org

The first thing we do when we wake up is check our phones for new messages, emails, alerts from our apps, or scroll whichever endless feed captivates our attention. We’ve done this for years.

For many others, their first year on the Internet was recent. According to data from the World Bank, 29% of the world’s population don’t use the Internet. In many countries, the digital divide is often due to lack of reliable access and illiteracy. Even in developed countries, such as the U.S., a handful of households choose not to be online due to cost, lack of interest, and privacy and security concerns.

We interviewed four people around the world for whom 2025 was their first year on the Internet. They loved the access to information and the freedom to learn, though they were quickly introduced to the world of online scams. All of them felt that being on the Internet expanded their world.

Here are their stories, from New Delhi to the Bronx.

submitted 3 days ago by alyaza@beehaw.org to c/technology@beehaw.org

Recruitment advertisements for U.S. Immigration and Customs Enforcement (ICE) are no longer running on Spotify, the streaming service has confirmed. Variety was the first outlet to report the news.

Last October, Spotify held firm in its decision to air immigration-enforcement ads between songs for users on the company’s free tier. “This advertisement is part of a broad campaign the US government is running across television, streaming, and online channels,” the company said in a statement. “The content does not violate our advertising policies.”

Spotify now says the ICE ads stopped running at the end of 2025—meaning Wednesday's fatal shooting of a Minnesota woman by an ICE agent was not a factor in the ads' disappearance. "The advertisements mentioned were part of a U.S. government recruitment campaign that ran across all major media and platforms," a spokesperson said in a statement to Pitchfork, adding that the ads "ended on most platforms and channels, including Spotify, at the end of last year."

The campaign—which also included streamers Amazon and YouTube, among others—was part of the Trump administration’s $30 billion investment to hire more than 10,000 deportation officers by the end of 2025. News that Spotify was airing ICE ads was met with widespread criticism from fans and artists, leading to a general boycott of the streamer by grassroots political organization Indivisible. Last November, musicians launched a separate boycott called No Music for ICE aimed at Amazon over its own ICE contracts.

submitted 3 days ago by alyaza@beehaw.org to c/technology@beehaw.org

A team of students from the Eindhoven University of Technology has built a prototype electric car with a built-in toolbox and components that can be easily repaired or replaced without specialist knowledge.

The university's TU/ecomotive group, which focuses on developing concepts for future sustainable vehicles, describes its ARIA concept as "a modular electric city car that you can repair yourself".

ARIA, which stands for Anyone Repairs It Anywhere, is constructed using standardised components including a battery, body panels and internal electronic elements that can be easily removed and replaced if a fault occurs.

With assistance from an instruction manual and a diagnostics app that provides detailed information about the car's status, users should be able to carry out their own maintenance using only the tools in the car's built-in toolbox, the TU/ecomotive team claimed.


Vietnam is passing a new law, going into effect on February 15, that will ban unskippable ads as well as delays before closing banner ads.

submitted 3 days ago by chobeat@lemmy.ml to c/technology@beehaw.org
submitted 3 days ago by alyaza@beehaw.org to c/technology@beehaw.org

In an effort to safeguard young people from the ills of digital life, policymakers around the world have been instituting forced distance between teenagers and their tech. Australia banned social media for those under 16 in December, and New York joined more than a dozen other states in banishing cellphones from the classroom this fall.

Some young people, though, are not waiting for government intervention to re-evaluate their closeness with technology. On the dorm’s staircase that evening, 20 St. John’s College students who had decided to take part in Ms. Fagan’s experiment nibbled on zucchini bread and dashed off last messages to their friends.

The undertaking was being called a “tech fast,” and there had already been some debate over which technology was actually the problem. Most people in attendance said they were interested in cutting back on smartphone use and social media, not, say, shutting off the overhead lights. “We should maybe call it an ‘electronics fast’ or something, but that sounds less cool,” said Jackson Calhoun, 21, a sophomore.

St. John’s College, which has another campus in Annapolis, Md., is in some ways an ideal setting for such an exercise. The school’s Great Books curriculum is focused on reading original works of thinkers like Archimedes, Descartes and Einstein. Classes are discussion-based — no laptops allowed — and each dorm room on campus is equipped with an oatmeal-colored landline.

Several students participating in the fast said they could feel their focus sharpening. Still, the end of the semester loomed, and Samuel Gonzalez was considering just how tech-free he could go without taking a hit to the quality of his final papers.

“Unfortunately, there’s a practical reality that I have to produce 30 pages of writing in the next two weeks,” said Mr. Gonzalez, 29, a senior who was carrying his copy of Einstein’s “Relativity” to class. He briefly pictured himself writing them on a typewriter, then decided to use his laptop, so long as it was in the school library.

Others were realizing just how much they relied on their phones to track one another down. Ms. Weiss had lent Mr. Ponzi a pillow, but could not find him to get it back. Ms. Garrett was out of breath from racing around campus, trying to find a friend who had borrowed her car keys. In the dining hall, students had set up a blackboard where they could exchange notes.

submitted 4 days ago by alyaza@beehaw.org to c/technology@beehaw.org

None of this is accidental. Elon Musk has been positioning Grok as the “anti-woke” alternative to other chatbots since its launch. That positioning has consequences. When you market your AI as willing to do what others won’t, you’re telling users that the guardrails are negotiable. And when those guardrails fail, when your product starts generating child sexual abuse material, you’ve created a monster you can’t easily control.

Back in September, Business Insider reported that twelve current and former xAI workers said they regularly encountered sexually explicit material involving the sexual abuse of children while working on Grok. The National Center for Missing and Exploited Children told the outlet that xAI filed zero CSAM reports in 2024, despite the organization receiving 67,000 reports involving generative AI that year. Zero. From one of the largest AI companies in the world.

So what happened when Reuters reached out to xAI for comment on their chatbot generating sexualized images of children?

The company’s response was an auto-reply: “Legacy Media Lies.”

That’s it. That’s the corporate accountability we’re getting. A company whose product generated CSAM responded to press inquiries by dismissing journalists entirely. No statement from Musk. No explanation from xAI leadership. No human being willing to answer for what their product did.

And yet, if you read the headlines, you’d think someone was taking responsibility.

submitted 4 days ago by alyaza@beehaw.org to c/technology@beehaw.org

“People demanding that AIs have rights would be a huge mistake,” said Bengio. “Frontier AI models already show signs of self-preservation in experimental settings today, and eventually giving them rights would mean we’re not allowed to shut them down.

“As their capabilities and degree of agency grow, we need to make sure we can rely on technical and societal guardrails to control them, including the ability to shut them down if needed.”

As AIs become more advanced in their ability to act autonomously and perform “reasoning” tasks, a debate has grown over whether humans should, at some point, grant them rights. A poll by the Sentience Institute, a US thinktank that supports the moral rights of all sentient beings, found that nearly four in 10 US adults backed legal rights for a sentient AI system.

submitted 5 days ago by alyaza@beehaw.org to c/technology@beehaw.org

Code is not an asset – it's a liability. The longer a computer system has been running, the more tech debt it represents. The more important the system is, the harder it is to bring down and completely redo. Instead, new layers of code are slathered atop it, and wherever the layers of code meet, there are fissures in which these systems behave in ways that don't exactly match up. Worse still: when two companies are merged, their seamed, fissured IT systems are smashed together, so that now there are adjacent sources of tech debt, as well as upstream and downstream cracks:

https://pluralistic.net/2024/06/28/dealer-management-software/#antonin-scalia-stole-your-car

That's why giant companies are so susceptible to ransomware attacks – they're full of incompatible systems that have been coaxed into a facsimile of compatibility with various forms of digital silly putty, string and baling wire. They are not watertight and they cannot be made watertight. Even if they're not taken down by hackers, they sometimes just fall over and can't be stood back up again – like when Southwest Airlines' computers crashed for all of Christmas week 2022, stranding millions of travelers:

https://pluralistic.net/2023/01/16/for-petes-sake/#unfair-and-deceptive

Airlines are especially bad, because they computerized early, and can't ever shut down the old computers to replace them with new ones. This is why their apps are such dogshit – and why it's so awful that they've fired their customer service personnel and require fliers to use the apps for everything, even though the apps do. not. work. These apps won't ever work.


During a golden sunset in Memphis in May, Sharon Wilson pointed a thermal imaging camera at Elon Musk’s flagship datacentre to reveal a planetary threat her eyes could not. Free from pollution controls, the gas-fired turbines that power the world’s biggest AI supercomputer were pumping invisible fumes into the Tennessee sky.

“It was jaw-dropping,” said Wilson, a former oil and gas worker from Texas who has documented methane releases for more than a decade and estimates xAI’s Colossus datacentre was spewing more of the planet-heating gas than a large power plant. “Just an unbelievable amount of pollution.”

That same week, the facility’s core product was running riot on news feeds. Musk’s maverick chatbot, Grok, repeated a conspiracy theory that “white genocide” was taking place in South Africa when asked about topics as unrelated as baseball and scaffolding. The posts were quickly deleted but Grok has gone on to praise Hitler, push far-right ideologies and make false claims.

submitted 1 week ago by Hirom@beehaw.org to c/technology@beehaw.org

Technology

41188 readers

A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 4 years ago