
Hey Beeple and visitors to Beehaw: I think we need to have a discussion about !technology@beehaw.org, community culture, and moderation. First, some of the reasons that I think we need to have this conversation.

  1. Technology got big fast and has stayed Beehaw's most active community.
  2. Technology gets more reports (about double in the last month, by a rough hand count) than the next highest community I moderate (Politics, and that's during an election season that involved a disastrous debate, an assassination attempt on a candidate, and a major party's presumptive nominee dropping out of the race).
  3. For a long time, I and other mods have felt that Technology at times isn’t living up to the Beehaw ethos. More often than I’d like, I see comments in this community where users are abusive or insulting toward one another, often without any provocation beyond the perception that the other user’s opinion is wrong.

Because of these reasons, we have decided that we may need to be a little more hands-on with our moderation of Technology. Here’s what that might mean:

  1. Mods will be more actively removing comments that are unkind or abusive, that involve personal attacks, or that just have really bad vibes.
    a. We will always try to be fair, but you may not always agree with our moderation decisions. Please try to respect those decisions anyway. We will generally try to moderate in a way that is both proportional and gradual.
    b. We are more likely to respond to particularly bad behavior from off-instance users with pre-emptive bans. This isn't because off-instance users are worse or less valuable, but because we can't vet users from other instances, we don't interact with them as frequently, and other instances may have less strict sign-up policies than Beehaw, which can turn moderation into whack-a-mole.
  2. We will need you to report early and often. The drawback of getting reports for something that doesn't require our intervention is outweighed by the benefit of reaching a situation before it spirals out of control. By all means, if you’re not sure whether something rises to the level of violating our rules, say so in the report reason; I'd personally rather get reports early than late, when a thread has already spiraled into an all-out flamewar.
    a. That said, please don't report people simply for being wrong, unless they're wrong in a way that is actually dangerous to others. It would be better to kindly disagree with them in a nice comment.
    b. Please, feel free to try and de-escalate arguments and remind one another of the humanity of the people behind the usernames. Remember to Be(e) Nice even when disagreeing with one another. Yes, even Windows users.
  3. We will try to be more proactive in stepping in when arguments are happening and trying to remind folks to Be(e) Nice.
    a. This isn't always possible. Mods are all volunteers with jobs and lives, and things often get out of hand before we are aware of the problem due to the size of the community and mod team.
    b. This isn't always helpful, but we try to make these kinds of gentle reminders our first resort when we get to things early enough. It’s also usually useful in gauging whether someone is a good fit for Beehaw. If someone responds with abuse to a gentle nudge about their behavior, it’s generally a good indication that they either aren’t aware of or don’t care about the type of community we are trying to maintain.

I know our philosophy posts can be long and sometimes a little meandering (personally that's why I love them) but do take the time to read them if you haven't. If you can't/won't or just need a reminder, though, I'll try to distill the parts that I think are most salient to this particular post:

  1. Be(e) nice. By nice, we don't mean merely polite, or nice in the surface-level “oh, bless your heart” way; we mean kind.
  2. Remember the human. The users that you interact with on Beehaw (and most likely other parts of the internet) are people, and people should be treated kindly and in good-faith whenever possible.
  3. Assume good faith. Whenever possible, and until demonstrated otherwise, assume that users don't have a secret, evil agenda. If you think they might be saying or implying something bad, ask them (kindly) to clarify and give them a chance to explain. Most likely, they've communicated poorly, or you've misunderstood. After all of that, you may still disagree with them, but we can disagree about Technology and still give one another the respect due to fellow humans.
submitted 50 minutes ago* (last edited 46 minutes ago) by Powderhorn@beehaw.org to c/technology@beehaw.org

Usually, I have a spare Pixel on hand to test things out with, but the power button broke off of my 2XL.

Anyway, I've had a 10a for a couple of weeks now after my 9a was stolen, and I'm finding it really difficult to get jazzed about starting from scratch.

As such, this seems like the perfect time to say "fuck it, I'm giving GrapheneOS a try."

So, my questions are:

  • Does my eSIM remain intact on the device?

  • How do I get apps without Google Play?

  • Can I still attach a Gmail account for notifications without going all in?

  • What about contacts and message history?

  • Is there a good way to get my password keychain transferred?

  • What is the status of NFC payments?

This whole experience has been hell (the last time I set up a phone from scratch was with my 2XL; given that I'm on a 10a now, that was apparently eight years ago), but it's not even done, and I have no interest in finishing.

This said, as a homeless single guy, having a phone is important, and while I've rooted some lesser phones in the past via ADB, I have no margin for error here.


Steve Bannon and Bernie Sanders don’t agree on much. But both think that AI is a disaster for the working class. The Vermont senator recently wrote that “AI oligarchs do not want to just replace specific jobs. They want to replace workers.” Bannon, Trump’s former chief strategist, made similar comments last week: Silicon Valley does “not care about the little guy,” he said in a podcast episode titled “Stopping the AI Oligarchs From Stealing Humanity.” This emergent “Bernie-to-Bannon” coalition points to the growing bipartisan anxiety over AI. In polls, the United States ranks among the countries most concerned about AI. America is both the world’s foremost developer of AI and its chief hater.

Recently, Maine passed the country’s first statewide data-center moratorium (though the bill was vetoed by the governor). Nationally, a record number of proposed projects were canceled in the first quarter of this year following local pushback. Meanwhile, in extreme cases, concerns about AI appear to be tipping into violence. In April, someone shot 13 rounds at an Indianapolis councilman’s house and left a note under his doormat: “NO DATA CENTERS,” it read. Days later, a man threw a Molotov cocktail at Sam Altman’s home before heading to OpenAI’s headquarters, where he allegedly threatened to burn down the building and kill anyone inside. (The man has since pleaded not guilty to several charges, including attempted murder.) Social-media posts applauding the attack racked up thousands of likes: “I hope that Molotov is okay!” wrote one commenter.

All of this may be only the start. The AI industry has spent recent years warning of a jobless future. So far, narratives about labor displacement have been largely speculation. While a smattering of tech executives have attributed job cuts to AI, many analysts have accused these CEOs of “AI-washing”—essentially, using the technology as a scapegoat for roles they would have eliminated regardless. If anything, AI has mostly been a financial boon for the country, buoying the stock market and driving growth. But that could all change, of course. Imagine the uproar if jobs across the economy truly start disappearing en masse.

The Rise of the Bullshittery (xn--gckvb8fzb.com)
submitted 7 hours ago by alyaza@beehaw.org to c/technology@beehaw.org

A few weeks ago, I found myself in one of the rare situations in which I was mindlessly doom-scrolling on LinkedIn, only to see one post after another that contained no actual information, and not a single sentence that would have lost any substance had you replaced every noun in it with a different noun. There were thought leaders leading no thoughts, founders founding nothing of actual value, strategists describing strategies that amounted to “be visible” and “ship fast”, and an alarming number of self-described AI experts whose expertise appeared to consist entirely of having a ChatGPT or Claude subscription and the willingness to write about it in seventeen-paragraph posts.

There is a word for this kind of communication, one the philosopher Harry Frankfurt famously employed back in 1986, when he wrote a short essay called On Bullshit. Frankfurt’s central observation, which has aged terrifyingly well, is that the bullshitter is not the same as the liar, because the liar at least respects the truth enough to try to hide it, but the bullshitter does not care whether what they are saying is true or false. The truth-value of the statement is simply not part of their concern. The bullshitter is optimising for a different objective, usually appearing competent, appearing confident, or appearing to be the right kind of person to be in the room. And precisely because the bullshitter is indifferent to truth, Frankfurt argued, they are a greater threat to honest discourse than any liar. Twenty years on, that essay reads like a pre-mortem on the modern internet and, in parts, modern society.

submitted 18 hours ago by KayLeadfoot@fedia.io to c/technology@beehaw.org

TLDR: Since Tesla’s June 2025 robotaxi launch, Tesla has built a 39-vehicle unsupervised fleet, while Waymo has a newly disclosed 3,791-vehicle U.S. fleet. So Tesla appears to be on pace to catch up with Waymo’s autonomous fleet size by the year 2111.

LOL!
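For the curious, the 2111 estimate roughly pencils out. The fleet sizes below are from the TLDR; the elapsed time and the (generous) assumption that Waymo's fleet stays frozen are mine:

```python
# Back-of-the-envelope check of the "catch up by 2111" claim.
tesla_fleet = 39       # unsupervised vehicles built since June 2025
waymo_fleet = 3_791    # newly disclosed U.S. fleet
months_elapsed = 11    # assumption: June 2025 snapshot through ~May 2026

tesla_rate_per_year = tesla_fleet / months_elapsed * 12   # ~42.5 vehicles/yr
years_to_catch_up = (waymo_fleet - tesla_fleet) / tesla_rate_per_year
catch_up_year = 2026 + round(years_to_catch_up)
print(catch_up_year)  # 2114 under these assumptions
```

Shifting the assumed snapshot date by a month or two lands anywhere from the mid-2100s to the 2120s, so the post's 2111 is the right ballpark.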


Oh, joy.

Google’s I/O conference is next week, and we expect to hear a lot about the company’s AI endeavors. The company says there’s so much to talk about that it’s spilling the Android beans a little early, and yes, a lot of AI is involved. In the coming months, Google will roll out more smartphone AI features under the Gemini Intelligence banner, bringing more automation and customization to your phone.

App automation will be a major element of Android going forward, Google says. Automation for apps is expanding after Google began testing it earlier in 2026 with DoorDash and Uber on Pixel and Samsung phones. It was a very frustrating experience at launch, but Google says it has spent the intervening months fine-tuning the system.

Google promises that Android will be able to handle more complex automations across apps. For example, the robot could find a course syllabus in Gmail and then hop to a shopping app to add the necessary books to your shopping cart. Google also suggests taking a picture of a travel brochure and telling Gemini to book something similar in the Expedia app.

I've yet to find a reason I'd want my apps talking to each other, let alone unsupervised.

But the more basic issue is that agentic "AI" isn't fit for purpose.

So, I have a follow-up appointment for how my dentures are fitting. Would it be cool to just say "appointment at 1 p.m. at this location in two weeks" and have it arrange the rideshares so I don't need to remember anything?

Sure, it would. Kinda creepy, but cool nonetheless. A couple of issues here:

  • I generally use Lyft, and outside of trips to the airport, I've learned that booking in advance is always far more expensive than the spot price (YMMV). On top of that, I can usually trigger a discount-offer notification by checking prices about 15 minutes before I actually plan to book. Put it all together, and advance booking can mean paying fully 75% more than I needed to. That's the last thing I want Gemini automating for me.

  • It's a fucking medical appointment. Divining when I'll need to be picked up is a fool's errand, no matter what I blocked out for it in my calendar. Unless you plan on closing down a bar, this applies similarly.
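(Illustrative arithmetic on how that 75% can happen; every price below is an assumption for the sake of the math, not a real Lyft quote.)

```python
# How an advance-booking premium plus a missed discount compound.
spot_price = 20.00       # assumed spot price at ride time
advance_premium = 0.50   # assumed 50% markup for booking in advance
discount_offer = 0.15    # assumed 15% off via the nudge notification

advance_price = spot_price * (1 + advance_premium)  # 30.00
best_price = spot_price * (1 - discount_offer)      # 17.00
overpay = advance_price / best_price - 1
print(f"{overpay:.0%}")  # 76% more than the best available price
```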

So, I'll get the guaranteed highest price with no flexibility for reality, and this is an improvement?

And you want me to use this to book larger travel plans? Highest airfare, highest hotel rate? Reservations at a Michelin restaurant because we happen to have traveled there for our anniversary, but between the outrageous airfare and usurious hotel rate, ain't no one got the money for a $120 steak.


Amazon employees are using an internal AI tool to automate non-essential tasks in a bid to show managers they are using the technology more frequently.

The Seattle-based group has started to widely deploy its in-house “MeshClaw” product in recent weeks, allowing employees to create AI agents that can connect to workplace software and carry out tasks on a user’s behalf, according to three people familiar with the matter.

Some employees said colleagues were using the software to automate additional, unnecessary AI activity to increase their consumption of tokens—units of data processed by models.

They said the move reflected pressure to adopt the technology after Amazon introduced targets for more than 80 percent of developers to use AI each week, and earlier this year began tracking AI token consumption on internal leaderboards.

“There is just so much pressure to use these tools,” one Amazon employee told the FT. “Some people are just using MeshClaw to maximize their token usage.”


For years, Emrah Bayraktar did just about anything he could to make money: Cleaned cars. Worked a night-shift in a warehouse. Made sandwiches at a Subway during the day.

When he wasn't eking out a living with part-time work, the 25-year-old Bayraktar, from Antwerp, Belgium, would take out his iPhone, edit long interviews of influencers into snippets, and post them to Instagram.

[Photo caption: Pop star-turned-Arizona State University professor will.i.am teaches “The Agentic Self” at his headquarters in Los Angeles, Calif., on Jan. 14, 2026.]

"And then one random night, I saw a notification saying that I earned $12, I was like OK cool," Bayraktar said. "Then two weeks later, I made two-and-a-half thousand dollars, and I thought, 'Maybe I could just quit my jobs and go all-in on this.'"

And he did, earning a cut every time someone bought something from a link he placed in the video clip.

He became so skilled at editing the short videos that he now runs a network of 40,000 freelance clippers and has a YouTube channel where he teaches people how to become clippers, directing them to websites where, instead of getting paid for so-called affiliate link purchases, clippers are paid per view they generate.


cross-posted from: https://piefed.world/c/tech/p/1101720/programbench-a-new-benchmark-by-swe-bench-creators-from-facebook-meta-to-see-if-llms-can

ProgramBench

Can language models rebuild programs from scratch?

Given only a compiled binary and its documentation, agents must architect and implement a complete codebase that reproduces the original program's behavior.

In each task, the agent receives an executable and its documentation, and it must re-implement the given executable. It does not get access to any of the executable's source code, it cannot decompile the executable, and it cannot use the internet. There are 200 tasks in total covering different program complexities, ranging from small terminal utilities like jq and ripgrep to massive software projects like the PHP compiler, FFmpeg, and SQLite.

The agent must choose a language, design the architecture, write all source code, and produce a build script. Every design decision is the model's to make.

Once the agent submits a program, our test suite compares the candidate program's behavior against the original program. A candidate program passes only if all tests for that task pass.
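The pass/fail rule described above is essentially differential testing: run the reference and the candidate on the same inputs and require identical behavior. A minimal sketch of such a harness (the two stand-in "programs" below are tiny hypothetical examples, not real ProgramBench tasks):

```python
import subprocess
import sys

def behaviors_match(reference_cmd, candidate_cmd, test_inputs):
    """Differential test: the candidate passes only if its stdout and
    exit code match the reference's on every input (mirroring the
    all-tests-must-pass rule described above)."""
    for stdin_data in test_inputs:
        ref = subprocess.run(reference_cmd, input=stdin_data,
                             capture_output=True, text=True)
        cand = subprocess.run(candidate_cmd, input=stdin_data,
                              capture_output=True, text=True)
        if ref.stdout != cand.stdout or ref.returncode != cand.returncode:
            return False
    return True

# Stand-ins: two independent implementations of "reverse the input line".
reference = [sys.executable, "-c",
             "import sys; print(sys.stdin.read().strip()[::-1])"]
candidate = [sys.executable, "-c",
             "import sys; s = sys.stdin.read().strip(); print(''.join(reversed(s)))"]

print(behaviors_match(reference, candidate, ["hello", "abc"]))  # True
```

The real benchmark compares full binaries rather than snippets, but the shape of the check is the same: behavior in, verdict out, no credit for partial matches.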


submitted 1 day ago by Brad@beehaw.org to c/technology@beehaw.org

The State of California's data office, through the Engaged California program, is looking for input on AI policy at the state level. This is an opportunity for Californians to give input to state government and help shape California policy.

submitted 2 days ago by Templa@beehaw.org to c/technology@beehaw.org

Apparently they can be assembled on site, which means they aren't limited to diameters that will fit on a truck. So they can build higher, which results in better yields.

Manufacturer's video: https://www.youtube.com/watch?v=-NkTkVwJNjw

submitted 2 days ago by JRepin@lemmy.ml to c/technology@beehaw.org

cross-posted from: https://lemmy.ml/post/47169376

Over the past decade, the AI industry has come to exert an unprecedented economic, political and societal power and influence. It is therefore critical that we comprehend the extent and depth of pervasive and multifaceted capture of AI regulation by corporate actors in order to contend and challenge it. In this paper, we first develop a taxonomy of mechanisms enabling capture to provide a comprehensive understanding of the problem. Grounded in design science research (DSR) methodologies and extensive scoping review of existing literature and media reports, our taxonomy of capture consists of 27 mechanisms across five categories. We then develop an annotation template incorporating our taxonomy, and manually annotate and analyse 100 news articles. The purpose behind this analysis is twofold: validate our taxonomy and provide a novel quantification of capture mechanisms and dominant narratives. Our analysis identifies 249 instances of capture mechanisms, often co-occurring with narratives that rationalise such capture. We find that the most recurring categories of mechanisms are Discourse & Epistemic Influence, concerning narrative framing, and Elusion of law, related to violations and contentious interpretations of antitrust, privacy, copyright and labour laws. We further find that Regulation stifles innovation, Red tape and National Interest are the most frequently invoked narratives used to rationalise capture. We emphasize the extent and breadth of regulatory capture by coalescing forces -- Big AI and governments -- as something policy makers and the public ought to treat as an emergency. Finally, we put forward key lessons learned from other industries along with transferable tactics for uncovering, resisting and challenging Big AI capture as well as in envisioning counter narratives.

Full paper: PDF | HTML | TeX source

submitted 2 days ago by beep@piefed.world to c/technology@beehaw.org
submitted 2 days ago by chobeat@lemmy.ml to c/technology@beehaw.org

A grift? From Trump? How could we possibly have gotten here?

Nearly 600,000 Trump supporters paid £74 ($100) each towards a gold smartphone that, nearly a year on, does not exist.

The Trump Mobile T1 phone was announced in June 2025 by Donald Trump Jr. and Eric Trump as a patriotic alternative to Apple and Samsung, retailing at £370 ($499) and promising a 'Made in the USA' build.

An estimated 590,000 buyers paid a £74 ($100) deposit to secure one, collectively handing the venture roughly £43.7 million ($59 million). As of May 2026, not a single confirmed customer has received the device. Now, a fresh wave of anger is spreading across MAGA forums after buyers received communication making clear that their money is, for all practical purposes, gone.
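The reported totals are internally consistent; a quick check, taking the article's figures at face value:

```python
buyers = 590_000     # estimated deposit-payers
deposit_usd = 100    # per-buyer deposit in dollars
deposit_gbp = 74     # same deposit in pounds

total_usd = buyers * deposit_usd  # matches the reported $59 million
total_gbp = buyers * deposit_gbp  # matches "roughly £43.7 million"
print(f"${total_usd:,} / £{total_gbp:,}")  # $59,000,000 / £43,660,000
```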

submitted 3 days ago* (last edited 3 days ago) by beep@piefed.world to c/technology@beehaw.org

cross-posted from: https://piefed.world/c/tech/p/1097630/airbnb-ceo-brian-chesky-ai-now-writes-60-of-its-new-code-ai-support-bot-now-solves-aroun

AI is changing how we build and innovate. Nearly 60% of the code our engineers produce is now coauthored with AI—roughly twice the industry average, by our estimate. This isn’t just an efficiency story. It means our teams are shipping faster, iterating more quickly, and delivering more improvements to guests and hosts than we could before. We’re converting AI adoption into real product wins, and we’re just getting started.

The clearest example is customer support. For guests who contact support through our AI Assistant, over 40% of issues are now resolved without a human agent—up from roughly a third in Q4 2025—with significantly faster resolution times. We saw our cost-per-booking decrease about 10% year-over-year in Q1—a meaningful trajectory we expect to continue as we further improve AI customer support this year.

Source: Airbnb Q1 2026 Earnings Call (PDF), Page 4.


Bosses betting on AI to slash headcount and boost margins are discovering an uncomfortable truth: the strategy isn't working.

New research from Gartner lays out the problem in stark terms. The analyst firm surveyed 350 global businesses - all with annual revenues above $1 billion, all piloting or deploying intelligent automation - and found that around 80 percent had cut staff as a result.

The returns? Elusive. Companies that reduced their workforces were just as likely to see negative outcomes or marginal gains as they were to generate any meaningful return on investment (ROI).

The conclusion? Layoffs don't create returns, they just create vacancies.

“Many CEOs turn to layoffs to demonstrate quick AI returns; however, this disposition is misplaced,” said Helen Poitevin, distinguished VP analyst at Gartner and lead researcher on the study. “Workforce reductions may create budget room, but they do not create return. Organizations that improve ROI are not those that eliminate the need for people, but those that amplify them,” she added.


Motherboard sales are collapsing amid unprecedented shortages fueled by AI, causing prices for many major PC components to rise across the board during the past six months, with memory modules and storage drives leading the way.

Those shortages are being exacerbated by chipmakers like Nvidia, Intel, and AMD, which have reduced production of consumer chips so they can manufacture more AI processors. The AI infrastructure buildout is also causing shortages for Intel and AMD CPUs (and even high-end Macs), as interest in agentic AI rockets through the roof.

Because of this, users who lack deep pockets are putting off upgrading their PCs and holding on to their current devices longer. Motherboard manufacturers have begun to feel the effects of these delayed purchases, with Digitimes [machine translated] reporting that the four major firms are revising target sales downward.


The cops never change. Only the tech toys do.

That’s the upshot of this report from the Institute for Justice, which has been tracking what cops have been tracking now that they have always-on access to massive networks of security cameras, including Flock Safety’s controversial offerings, which also include automatic license plate readers (ALPR).

The proliferation of police surveillance has led to repeated abuse. One shockingly common form: police officers using ALPR camera networks to keep tabs on their romantic interests, including current partners, exes, and even strangers who unwittingly caught their eye in public.

An Institute for Justice review of media reports has identified at least 14 cases nationwide of officers allegedly abusing ALPR data this way, with the bulk of those incidents happening since 2024.


Last fall, we featured an extensive interview with Petter Törnberg of the University of Amsterdam, who studies the underlying mechanisms of social media that give rise to its worst aspects: the partisan echo chambers, the concentration of influence among a small group of elite users (attention inequality), and the amplification of the most extreme divisive voices. He wasn’t optimistic about social media’s future.

Törnberg’s research showed that, while numerous platform-level intervention strategies have been proposed to combat these issues, none are likely to be effective. And it’s not the fault of much-hated algorithms, non-chronological feeds, or our human proclivity for seeking out negativity. Rather, the dynamics that give rise to all those negative outcomes are structurally embedded in the very architecture of social media. So we’re probably doomed to endless toxic feedback loops unless someone hits upon a brilliant fundamental redesign that manages to change those dynamics.

Törnberg has been very busy since then, producing two new papers and one new preprint building on this realization that social media is structured quite differently than the physical world, with unexpected downstream consequences. The first new paper, published in PLoS ONE, focuses specifically on the echo chamber effect, using the same approach of combining standard agent-based modeling with large language models (LLMs)—essentially creating little AI personas to simulate online social media behavior.
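To give a flavor of what agent-based modeling of echo chambers looks like, here is a toy bounded-confidence sketch of my own devising (not Törnberg's model; his agents are LLM personas, replaced here by a single number in [-1, 1]):

```python
import random

# Toy echo-chamber dynamics: agents hold an opinion in [-1, 1],
# preferentially interact with like-minded agents (homophily), and
# shift a step toward whoever they interacted with (assimilation).
random.seed(0)
opinions = [random.uniform(-1, 1) for _ in range(100)]
initial = opinions[:]

def spread(vals):
    """Population variance: a crude measure of opinion diversity."""
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

for _ in range(2000):
    i = random.randrange(len(opinions))
    # Homophily: only consider partners whose opinion is nearby.
    partners = [j for j in range(len(opinions))
                if j != i and abs(opinions[j] - opinions[i]) < 0.4]
    if partners:
        j = random.choice(partners)
        # Assimilation: move 10% of the way toward the partner.
        opinions[i] += 0.1 * (opinions[j] - opinions[i])

# Like-minded agents pull one another into clusters: diversity within
# clusters collapses while the clusters themselves stay apart.
print(f"opinion spread: {spread(initial):.3f} -> {spread(opinions):.3f}")
```

The point of models like this is that no algorithm or ranking tweak appears anywhere in the code; the clustering falls out of the interaction structure alone, which is Törnberg's core claim.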


Technology

42930 readers
178 users here now

A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.


This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 4 years ago