
Hey Beeple and visitors to Beehaw: I think we need to have a discussion about !technology@beehaw.org, community culture, and moderation. First, some of the reasons that I think we need to have this conversation.

  1. Technology got big fast and has stayed Beehaw's most active community.
  2. Technology gets more reports (roughly double in the last month, by a rough hand count) than the next most-reported community that I moderate. That community is Politics, and this was during an election season featuring a disastrous debate, an assassination attempt on a candidate, and a major party's presumptive nominee dropping out of the race.
  3. For a long time, I and other mods have felt that Technology at times isn’t living up to the Beehaw ethos. More often than I like I see comments in this community where users are being abusive or insulting toward one another, often without any provocation other than the perception that the other user’s opinion is wrong.

Because of these reasons, we have decided that we may need to be a little more hands-on with our moderation of Technology. Here’s what that might mean:

  1. Mods will be more actively removing comments that are unkind or abusive, that involve personal attacks, or that just have really bad vibes.
    a. We will always try to be fair, but you may not always agree with our moderation decisions. Please try to respect those decisions anyway. We will generally try to moderate in a way that is both proportional and gradual.
    b. We are more likely to respond to particularly bad behavior from off-instance users with pre-emptive bans. This is not because off-instance users are worse or less valuable, but simply because we aren't able to vet users from other instances and don't interact with them as frequently, and other instances may have less strict sign-up policies than Beehaw, which can turn moderation into a game of whack-a-mole.
  2. We will need you to report early and often. The drawbacks of getting reports for something that doesn't require our intervention are outweighed by the benefits of being able to get to a situation before it spirals out of control. If you're not sure whether something has risen to the level of violating our rules, say so in the report reason; I'd personally rather get reports early than late, after a thread has spiraled into an all-out flamewar.
    a. That said, please don't report people for being wrong, unless they are doing so in a way that is actually dangerous to others. It would be better for you to kindly disagree with them in a nice comment.
    b. Please, feel free to try and de-escalate arguments and remind one another of the humanity of the people behind the usernames. Remember to Be(e) Nice even when disagreeing with one another. Yes, even Windows users.
  3. We will try to be more proactive in stepping in when arguments are happening and trying to remind folks to Be(e) Nice.
    a. This isn't always possible. Mods are all volunteers with jobs and lives, and things often get out of hand before we are aware of the problem due to the size of the community and mod team.
    b. This isn't always helpful, but we try to make these kinds of gentle reminders our first resort when we get to things early enough. It’s also usually useful in gauging whether someone is a good fit for Beehaw. If someone responds with abuse to a gentle nudge about their behavior, it’s generally a good indication that they either aren’t aware of or don’t care about the type of community we are trying to maintain.

I know our philosophy posts can be long and sometimes a little meandering (personally that's why I love them) but do take the time to read them if you haven't. If you can't/won't or just need a reminder, though, I'll try to distill the parts that I think are most salient to this particular post:

  1. Be(e) nice. By nice, we don't mean merely being polite, or nice in the surface-level "oh bless your heart" kind of way; we mean be kind.
  2. Remember the human. The users you interact with on Beehaw (and most likely other parts of the internet) are people, and people should be treated kindly and in good faith whenever possible.
  3. Assume good faith. Whenever possible, and until demonstrated otherwise, assume that users don't have a secret, evil agenda. If you think they might be saying or implying something you think is bad, ask them to clarify (kindly) and give them a chance to explain. Most likely, they've communicated themselves poorly, or you've misunderstood. After all of that, it's possible that you may disagree with them still, but we can disagree about Technology and still give one another the respect due to other humans.
submitted 12 minutes ago by Powderhorn@beehaw.org to c/technology@beehaw.org

We’ve been saying this for years now, and we’re going to keep saying it until the message finally sinks in: mandatory age verification creates massive, centralized honeypots of sensitive biometric data that will inevitably be breached. Every single time. And every single time it happens, the politicians who mandated these systems and the companies that built them act shocked—shocked!—that collecting enormous databases of government IDs, facial scans, and biometric data from millions of people turns out to be a security nightmare.

Well, here we go again.

A couple weeks ago, Discord announced it would launch “teen-by-default” settings for its global audience, meaning all users would be shunted into a restricted experience unless they verified their age through biometric scanning. The internet, predictably, was not thrilled. But while many users were busy venting their frustration, a group of security researchers decided to do something more useful: they took a look under the hood at Persona, one of the companies Discord was using for verification (specifically for users in the UK).

What they found, according to The Rage, was exactly what we would predict:

Together with two other researchers, they set out to look into Persona, the San Francisco-based startup that’s used by Discord for biometric identity verification – and found a Persona frontend exposed to the open internet on a US government authorized server.

In 2,456 publicly accessible files, the code revealed the extensive surveillance Persona software performs on its users, bundled in an interface that pairs facial recognition with financial reporting – and a parallel implementation that appears designed to serve federal agencies.

Let me say that again: 2,456 publicly accessible files sitting on a government-authorized server, exposed to the open internet.

submitted 20 minutes ago by Powderhorn@beehaw.org to c/technology@beehaw.org

Ballooning memory prices are forecast to kill off entry-level PCs, leading to a decline in global shipments this year - and a similar effect is going to hit smartphones.

Analyst biz Gartner is projecting a drop in PC shipments of more than 10 percent during 2026, and a decline of around 8 percent for smartphones, all due to the AI-driven memory shortage.

Some types of memory have doubled or quadrupled in price since last year, and Gartner believes the DRAM and NAND flash used in PCs and phones are set for a further 130 percent rise by the end of 2026.

The upshot of this is that budget PCs will disappear, simply because vendors won't be able to build them at a price that satisfies cost-conscious buyers, according to Gartner research director Ranjit Atwal.

"Because the price of memory is increasing so much, vendors lose the ability to provide entry-level PCs – those below about $500," he told The Register.
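As a quick back-of-the-envelope check on what that compounding means, assume a hypothetical $80 memory bill of materials last year (the figure is illustrative, not from Gartner or The Register):

```python
# Back-of-the-envelope: how compounding memory price increases squeeze
# a hypothetical sub-$500 entry-level PC. The baseline cost is invented
# for illustration; the multipliers come from the article.

baseline_memory_cost = 80.0  # hypothetical DRAM+NAND cost per unit, last year

already_risen = baseline_memory_cost * 2.0    # "doubled ... since last year"
further_rise = already_risen * (1 + 1.30)     # a further 130% by end of 2026

print(f"last year:            ${baseline_memory_cost:.0f}")
print(f"today (2x):           ${already_risen:.0f}")
print(f"end of 2026 (+130%):  ${further_rise:.0f}")
```

A component that cost $80 ends up near $368, which by itself eats most of a sub-$500 build's budget; that is the mechanism behind the "entry-level PCs disappear" forecast.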

submitted 30 minutes ago by Powderhorn@beehaw.org to c/technology@beehaw.org

Imagine this: you're on Reddit, Hacker News, or some forum, posting under a silly username like GamerCat2025 or SecretCoderX. You think you're anonymous: no one knows you, so you can express your thoughts freely.

Well, a brand-new research paper just blew that idea apart. It's called "Large-scale online deanonymization with LLMs," which is a fancy way of saying "figuring out the real person behind a secret online name."

The researchers include people from ETH Zurich, Anthropic (the company behind Claude), and a research group called MATS, and they showed that today's most powerful AI chatbots can play detective and unmask people far better than ever before.
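The paper's method relies on LLMs, but it helps to see what the classical baseline looks like. Here is a minimal stylometric matcher using character trigram profiles and cosine similarity; the usernames and posts are invented for illustration, and this is far cruder than anything in the paper:

```python
# Minimal classical stylometry baseline (NOT the paper's LLM method):
# match an anonymous post to a known author by comparing character
# 3-gram frequency profiles with cosine similarity. Toy data below is
# invented for illustration.
from collections import Counter
import math

def ngram_profile(text: str, n: int = 3) -> Counter:
    # Frequency profile of overlapping character n-grams.
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse frequency vectors.
    dot = sum(a[g] * b[g] for g in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

known = {
    "GamerCat2025": "honestly the new patch ruined the meta, honestly unplayable",
    "SecretCoderX": "refactored the parser again; the borrow checker disagrees",
}
anonymous_post = "refactored the lexer too; the borrow checker still disagrees"

anon = ngram_profile(anonymous_post)
best = max(known, key=lambda u: cosine(anon, ngram_profile(known[u])))
print(best)  # the stylistically closest known author
```

Even this toy picks the right author when wording overlaps; the point of the paper is that LLMs do this kind of matching far better, at scale, without hand-built features.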


Workers grappling with the rapid growth of artificial intelligence have said they feel “devalued” by the technology and warned of a downward trajectory in the quality of work.

Recent analysis by the International Monetary Fund found AI would affect about 40% of jobs around the world. Its head, Kristalina Georgieva, has said: “This is like a tsunami hitting the labour market.”

Workers who have trained AI models to replace some or all of their roles tell the Guardian about their experiences.


What will happen if the Linux kernel starts having AI generated code in it?

Moltbook was peak AI theater (www.technologyreview.com)

SAN DIEGO, California, Feb 23 (Reuters) - Researchers at ASML Holding say they have found a way to boost the power of the light source in a key chipmaking machine to turn out up to 50% more chips by decade's end, helping retain the Dutch company's edge over emerging U.S. and Chinese rivals.

ASML is the world's only maker of commercial extreme ultraviolet lithography (EUV) machines, a critical tool for chipmakers such as Taiwan Semiconductor Manufacturing Co, Intel and others in producing advanced computing chips.

"It's not a parlor trick or something like this, where we demonstrate for a very short time that it can work," Michael Purvis, ASML's lead technologist for its EUV source light, said in an interview.

"It's a system that can produce 1,000 watts under all the same requirements that you could see at a customer," he added, speaking at the company's California facilities near San Diego.


Developing new catalysts requires large-scale, repetitive experiments with frequent changes to catalyst composition and reaction conditions. Manual experiments are time-consuming and error-prone. A team has automated this process and significantly increased reproducibility by employing robots to manage reagent compositions and run the repeated tests.

submitted 4 days ago* (last edited 4 days ago) by Templa@beehaw.org to c/technology@beehaw.org

California’s new bill would require DOJ-approved 3D printers that report on themselves, targeting general-purpose machines. Assembly Member Bauer-Kahan introduced AB-2047, the “California Firearm Printing Prevention Act,” on February 17th. The bill would ban the sale or transfer of any 3D printer in California unless it appears on a state-maintained roster of approved makes and models… certified by the Department of Justice as equipped with “firearm blocking technology.” Manufacturers would need to submit attestations for every make and model. The DOJ would publish a list. If your printer isn’t on the list by March 1, 2029, it can’t be sold. In addition, knowingly disabling or circumventing the blocking software would be a misdemeanor.

We’ve been tracking this pattern. Washington State’s HB 2321 requires printers to include “blocking features” that can’t be defeated by users with “significant technical skill” (good luck with that on open-source firmware). New York’s budget bill S.9005 buries similar requirements in Part C, sweeping in CNC mills and anything capable of “subtractive manufacturing.” California’s version adds a certification bureaucracy on top: state-approved algorithms, state-approved software control processes, state-approved printer models, quarterly list updates, and civil penalties of up to $25,000 per violation.

As Michael Weinberg wrote after the New York and Washington proposals dropped… accurately identifying gun parts from geometry alone is incredibly hard, desktop printers lack the processing power to run this kind of analysis, and the open-source firmware that runs most machines makes any blocking requirement trivially easy to bypass. The Firearms Policy Coalition flagged AB-2047 on X, and the reactions tell you everything. Jon Lareau called it “stupidity on steroids,” pointing out that a simple spring-shaped part has no way of revealing its intended use. The Foundry put it plainly: “Regulating general-purpose machines is another. AB-2047 would require 3D printers to run state-approved surveillance software and criminalize modifying your own hardware.”

As we’ve said before on this blog, when we covered Washington and New York, it doesn’t matter whether you’re pro- or anti-gun. The state should prosecute people who make illegal things, not bolt useless surveillance software onto every tool in every classroom, library, and garage in the state. And as you can see, these bills spread; that’s how a small group can push legislation across the entire country. First Washington proposed theirs, then New York, now California. Once those three states pass such a law, that’s 20-25% of the country by GDP and population, and every manufacturer is forced to comply with a bad decision in order to stay in business. If you’re a maker, educator, or manufacturer anywhere in the US, even outside these states, this is your problem too.
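Weinberg's point about geometry is easy to verify from the file format itself: a binary STL is an 80-byte header, a triangle count, and a flat list of triangles, with no field for a part's name or purpose. A minimal sketch, building a one-triangle STL in memory rather than reading a real file:

```python
# A binary STL file is just a header plus a flat list of triangles:
# no part name, no intended use, no metadata a "firearm blocking"
# algorithm could key on.
import struct

def make_binary_stl(triangles):
    # 80-byte header (conventionally ignored), then a uint32 triangle count.
    data = b"\x00" * 80 + struct.pack("<I", len(triangles))
    for normal, v1, v2, v3 in triangles:
        # Each triangle: normal + 3 vertices as little-endian float32 triples.
        for vec in (normal, v1, v2, v3):
            data += struct.pack("<3f", *vec)
        data += struct.pack("<H", 0)  # attribute byte count (unused)
    return data

def read_triangle_count(stl_bytes: bytes) -> int:
    # Bytes 80..84 hold the triangle count; that's all the structure there is.
    return struct.unpack_from("<I", stl_bytes, 80)[0]

stl = make_binary_stl([((0, 0, 1), (0, 0, 0), (1, 0, 0), (0, 1, 0))])
print(read_triangle_count(stl))  # 1
```

Any "firearm blocking" check therefore has to classify raw triangle soup, which is exactly the hard inference problem critics say desktop printers can't run.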

submitted 6 days ago* (last edited 6 days ago) by remington@beehaw.org to c/technology@beehaw.org

53MB of source code leaked from a government endpoint. 269 verification checks. Biometric face databases. SAR filings to FinCEN. And the same company that verifies your ChatGPT account.


Matthew Ramirez started at Western Governors University as a computer science major in 2025, drawn by the promise of a high-paying, flexible career as a programmer. But as headlines mounted about tech layoffs and AI’s potential to replace entry-level coders, he began to question whether that path would actually lead to a job.

When the 20-year-old interviewed for a datacenter technician role that June and never heard back, his doubts deepened. In December, Ramirez decided on what he thought was a safer bet: turning away from computer science entirely. He dropped his planned major to instead apply to nursing school. He comes from a family of nurses, and sees the field as more stable and harder to automate than coding.

“Even though AI might not be at the point where it will overtake all these entry-level jobs now, by the time I graduate, it likely will,” Ramirez said.

Ramirez is not alone in reshaping his career out of anxiety over AI. As students like him are reconsidering their majors over concerns that AI may disrupt their employment prospects, more established workers – some with decades of experience – are rethinking their trajectories because they’re encountering AI at work and share the same unease. Some workers are eschewing it entirely; others are embracing it.


Scientists designed color-changing carbon dot biosensors that can detect spoiled meat in sealed packages in real-time, just in case you don't trust the sniff-test.


Well, well, well ... if it isn't the consequences of my own actions.

The clock is ticking for AI projects to either prove their worth or face the chopping block.

Or so says data management and machine learning biz Dataiku, which commissioned research conducted online by the Harris Poll to get a snapshot of the views of 600 chief information officers (CIOs) across the US, UK, France, Germany, UAE, Japan, South Korea, and Singapore.

The report, "The 7 Career-Making AI Decisions for CIOs in 2026," claims AI is facing corporate accountability in 2026 after several years of investment into research and pilot projects. CIOs are worried their careers are on the line if the tech's effectiveness falls short of expectations.

Money continues to be pumped into AI as the next great thing in business, but a growing number of studies have found that adopting AI tools hasn't helped the bottom line, and enterprises are seeing neither increased revenue nor decreased costs from their AI projects.


Some cultures used stone, others used parchment. Some even, for a time, used floppy disks. Now scientists have come up with a new way to keep archived data safe that, they say, could endure for millennia: laser-writing in glass.

From personal photos that are kept for a lifetime to business documents, medical information, data for scientific research, national records and heritage data, there is no shortage of information that needs to be preserved for very long periods of time.

But there is a problem: current long-term storage of digital media – including in datacentres that underpin the cloud – relies on magnetic tape and hard disks, both of which have limited lifespans. That means repeated cycles of copying on to new tapes and disks are required.

Now experts at Microsoft in Cambridge say they have refined a method for long-term data storage based on glass.

“It has incredible durability and incredible longevity. So once the data is safely inside the glass, it’s good for a really long time,” said Richard Black, the research director of Project Silica.


Not long after the terms “996” and “grindcore” entered the popular lexicon, people started telling me stories about what was happening at startups in San Francisco, ground zero for the artificial intelligence economy. There was the one about the founder who hadn’t taken a weekend off in more than six months. The woman who joked that she’d given up her social life to work at a prestigious AI company. Or the employees who had started taking their shoes off in the office because, well, if you were going to be there for at least 12 hours a day, six days a week, wouldn’t you rather be wearing slippers?

“If you go to a cafe on a Sunday, everyone is working,” says Sanju Lokuhitige, the co-founder of Mythril, a pre-seed-stage AI startup, who moved to San Francisco in November to be closer to the action. Lokuhitige says he works seven days a week, 12 hours a day, minus a few carefully selected social events each week where he can network with other people at startups. “Sometimes I’m coding the whole day,” he says. “I do not have work-life balance.”

Another startup employee, who came to San Francisco to work for an early-stage AI company, showed me dismal photos from his office: a two-bedroom apartment in the Dogpatch, a neighborhood popular with tech workers. His startup’s founders live and work in this apartment – from 9am until as late as 3am, breaking only to DoorDash meals or to sleep, and leaving the building only to take cigarette breaks. The employee (who asked not to use his name, since he still works for this company) described the situation as “horrendous”. “I’d heard about 996, but these guys don’t even do 996,” he says. “They’re working 16-hour days.”

I'd not heard about 996.


Technology


A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 4 years ago