1
92

Hey Beeple and visitors to Beehaw: I think we need to have a discussion about !technology@beehaw.org, community culture, and moderation. First, some of the reasons that I think we need to have this conversation.

  1. Technology got big fast and has stayed Beehaw's most active community.
  2. Technology gets more reports (about double in the last month, by a rough hand count) than the next-highest community I moderate, Politics, and that's during election season, in a month that involved a disastrous debate, an assassination attempt on a candidate, and a major party's presumptive nominee dropping out of the race.
  3. For a long time, I and other mods have felt that Technology at times isn’t living up to the Beehaw ethos. More often than I like I see comments in this community where users are being abusive or insulting toward one another, often without any provocation other than the perception that the other user’s opinion is wrong.

Because of these reasons, we have decided that we may need to be a little more hands-on with our moderation of Technology. Here’s what that might mean:

  1. Mods will be more actively removing comments that are unkind or abusive, that involve personal attacks, or that just have really bad vibes.
    a. We will always try to be fair, but you may not always agree with our moderation decisions. Please try to respect those decisions anyway. We will generally try to moderate in a way that is both proportional and gradual.
    b. We are more likely to respond to particularly bad behavior from off-instance users with pre-emptive bans. This is not because off-instance users are worse, or less valuable, but simply because we aren't able to vet users from other instances and don't interact with them as frequently, and other instances may have less strict sign-up policies than Beehaw, which can leave us playing whack-a-mole with ban evaders.
  2. We will need you to report early and often. The drawbacks of getting reports for something that doesn't require our intervention are outweighed by the benefits of us being able to get to a situation before it spirals out of control. By all means, if you're not sure whether something rises to the level of violating our rules, say so in the report reason, but I'd personally rather get reports early than late, when a thread has already spiraled into an all-out flame war.
    a. That said, please don't report people simply for being wrong, unless they're wrong in a way that is actually dangerous to others. It would be better to kindly disagree with them in a nice comment.
    b. Please, feel free to try and de-escalate arguments and remind one another of the humanity of the people behind the usernames. Remember to Be(e) Nice even when disagreeing with one another. Yes, even Windows users.
  3. We will try to be more proactive in stepping in when arguments are happening and trying to remind folks to Be(e) Nice.
    a. This isn't always possible. Mods are all volunteers with jobs and lives, and things often get out of hand before we are aware of the problem due to the size of the community and mod team.
    b. This isn't always helpful, but we try to make these kinds of gentle reminders our first resort when we get to things early enough. It’s also usually useful in gauging whether someone is a good fit for Beehaw. If someone responds with abuse to a gentle nudge about their behavior, it’s generally a good indication that they either aren’t aware of or don’t care about the type of community we are trying to maintain.

I know our philosophy posts can be long and sometimes a little meandering (personally that's why I love them) but do take the time to read them if you haven't. If you can't/won't or just need a reminder, though, I'll try to distill the parts that I think are most salient to this particular post:

  1. Be(e) nice. By nice, we don't mean merely polite, or nice in the surface-level "oh, bless your heart" way; we mean kind.
  2. Remember the human. The users that you interact with on Beehaw (and most likely other parts of the internet) are people, and people should be treated kindly and in good faith whenever possible.
  3. Assume good faith. Whenever possible, and until demonstrated otherwise, assume that users don't have a secret, evil agenda. If you think they might be saying or implying something bad, ask them to clarify (kindly) and give them a chance to explain. Most likely, they've expressed themselves poorly, or you've misunderstood. After all of that, you may still disagree with them, but we can disagree about Technology and still give one another the respect due to other humans.
2
57

Its fans just don't want to let go.

3
67

Lawmakers in Congress are moving quickly on the GUARD Act, an age-gating bill restricting minors’ access to a wide range of online tools, with a key vote expected this week. The proposal is framed as a response to alarming cases involving “AI companions” and vulnerable young users. But the text of the bill goes much further, and could require age gates even for search engines that use AI.

If enacted, the GUARD Act won’t just target a narrow category of risky chatbots. It would require companies to verify the age of every user — then block anyone under 18 from interacting with a huge range of online systems. It would block minors from everyday online tools, undermine parental guidance, and force adults to sacrifice their privacy. In the process, it would require services to implement speech-restricting and privacy-invasive age-verification systems for everyone—not just kids.

Under the GUARD Act’s broad definitions, a high school student could be barred from asking homework help tools questions about algebra problems. A teenager trying to return a product could be kicked out of a standard customer-service chat.

4
68

More than half of “long-shot” bets on military action made on Polymarket are successful, according to a new report that suggests prediction markets could pose a bigger threat than previously recognized to the security of sensitive information.

Analysis by the Anti-Corruption Data Collective, a non-profit research and advocacy group, found that long-shot bets—defined as wagers of $2,500 or more at odds of 35 percent or less—on the platform had an average win rate of around 52 percent in markets on military and defense actions.

That compares with a win rate of 25 percent across all politics-focused markets and just 14 percent for all markets on the platform as a whole.
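The arithmetic behind the concern is straightforward: a binary prediction-market share bought at a 35 percent implied probability should, absent inside knowledge, win about 35 percent of the time. A 52 percent win rate at those prices implies a large positive edge. The sketch below uses the article's figures and a simplifying assumption (all shares bought at exactly the 0.35 price ceiling, paying $1 on a win):

```python
def edge_per_dollar(win_rate: float, price: float) -> float:
    """Expected profit per $1 staked on a binary share bought at `price`
    (shares pay $1 if the event occurs, $0 otherwise)."""
    return win_rate / price - 1.0

# Article's figures: long-shot bets are priced at odds of 0.35 or less.
print(f"defense markets: {edge_per_dollar(0.52, 0.35):+.0%}")  # +49%
print(f"all politics:    {edge_per_dollar(0.25, 0.35):+.0%}")  # -29%
print(f"all markets:     {edge_per_dollar(0.14, 0.35):+.0%}")  # -60%
```

Since 0.35 is an upper bound on the actual prices paid, the real edge on the defense bets would be at least this large, which is why the pattern reads as informed trading rather than luck.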

The research is likely to add to growing concerns among regulators and lawmakers about insiders placing bets on the timing and success of military actions, amid fears that this could reveal classified information in advance.

5
39

French protestors say Windows 10's ESU will only leave users in the lurch, potentially rendering millions of PCs obsolete as they are pushed to upgrade to Windows 11.

6
40

Many people are hoping—nay, praying—that the potential AI bubble will burst soon.

But to hear Google tell it, generative AI is the future, and the company’s products have to change to keep up with the technical reality. As a result, Gemini is seeping into every nook and cranny of the Google ecosystem. Generative AI feeds on data, and Google has a lot of your data in products like Gmail and Drive. What does that mean for your privacy, and what happens if you don’t want Gemini peeking over your shoulder? Well, it’s kind of a mess.

The amount of data Gemini retains depends on how you access the AI, and opting out of data collection can mean running straight into so-called “dark patterns,” UI elements that work against the user’s interest.

This is the future?

7
111

It only took nine seconds for an AI coding agent gone rogue to delete a company’s entire production database and its backups, according to its founder. PocketOS, which sells software that car rental businesses rely on, descended into chaos after its databases were wiped, the company’s founder Jeremy Crane said.

The culprit was Cursor, an AI agent powered by Anthropic’s Claude Opus 4.6 model, which is one of the AI industry’s flagship models. As more industries embrace AI in an attempt to automate tasks and even replace workers, the chaos at PocketOS is a reminder of what could go wrong.

Crane said customers of PocketOS's car rental clients were left in the lurch when they arrived to pick up vehicles from businesses that no longer had access to the software that managed reservations and vehicle assignments.
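The article doesn't describe PocketOS's internal tooling, but the standard defense in post-mortems of incidents like this is to deny agents direct write access in the first place: give them read-only credentials, and route anything destructive through a human-approval gate. A minimal, hypothetical sketch of such a gate (the statement lists are simplified and not exhaustive):

```python
import re

# Only plainly read-only statements are auto-approved; anything that could
# mutate or destroy data is escalated to a human instead of executed.
READ_ONLY = re.compile(r"^\s*(SELECT|EXPLAIN|SHOW|WITH)\b", re.IGNORECASE)
DESTRUCTIVE = re.compile(r"\b(DROP|DELETE|TRUNCATE|ALTER|UPDATE|INSERT)\b",
                         re.IGNORECASE)

def gate(sql: str) -> str:
    """Return 'allow' for read-only statements, 'escalate' otherwise."""
    if READ_ONLY.match(sql) and not DESTRUCTIVE.search(sql):
        return "allow"
    return "escalate"

print(gate("SELECT * FROM reservations WHERE pickup_date = CURRENT_DATE"))
print(gate("DROP DATABASE production"))  # escalate
```

Regex filtering is a weak last line of defense (it can be evaded); the robust version of the same idea is enforcing read-only access at the database-role level, with backups stored where the agent's credentials cannot reach.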

8
173

Maryland has become the first state in the US to ban surveillance pricing in grocery stores.

Maryland’s law bans grocers and third-party delivery services from using a person’s personal data to set higher prices. Wes Moore, the governor, signed the measure into law on Tuesday. “At a time when technology can predict what we need, when we need it, when we’ll pay for it and also – when we’ll pay more for it, and at a time when we’re watching how big companies are then using these analytics against us to make record profits, Maryland is not just pushing back. Maryland is pushing forward because we are going to protect our people,” Moore said at the bill signing ceremony.

When engaging in surveillance pricing, stores rapidly change the cost of products based on consumer data, including their location, internet search history and demographics. That means buyers are paying different prices for the same items purchased around the same time. Critics of this method – also known as dynamic pricing – say that in doing so, businesses are effectively charging each person the most that they’re willing to pay.
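To make the mechanism concrete, here is a toy, entirely hypothetical version of the per-person adjustment the article describes (signal names and multipliers are invented for illustration): the quoted price starts from a base and is nudged upward by inferred signals about the individual shopper, which is exactly the practice Maryland's law prohibits.

```python
BASE_PRICES = {"milk": 3.49, "eggs": 4.29}

def quoted_price(item: str, shopper: dict) -> float:
    """Toy surveillance-pricing quote: base price, marked up by
    inferred per-shopper signals (location, search history, etc.)."""
    price = BASE_PRICES[item]
    if shopper.get("searched_for_item_recently"):
        price *= 1.10  # inferred urgency
    if shopper.get("affluent_zip_code"):
        price *= 1.05  # inferred willingness to pay
    return round(price, 2)

print(quoted_price("milk", {}))  # 3.49
print(quoted_price("milk", {"searched_for_item_recently": True,
                            "affluent_zip_code": True}))  # 4.03
```

Two shoppers buying the same carton of milk at the same moment see different prices, which is the "charging each person the most they're willing to pay" dynamic that critics object to.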

9
28

I heard that it has proprietary components like the init system (launchd).

10
98
submitted 4 days ago by XLE@piefed.social to c/technology@beehaw.org

They had one job.

11
27

If you're tired of interacting with a bot that spews Nazi propaganda or refers to itself as MechaHitler, you could stop using Elon Musk's xAI. Or, just to be sure, use an LLM whose training data ends in 1930, three years before the Nazis took power in Germany and nine years before World War II started.

A trio of AI researchers has released a 13-billion-parameter "vintage" language model they call Talkie, which has been trained solely on digital scans of English-language books, newspapers, periodicals, scientific journals, patents, and case law that were published before the end of 1930. Pre-1931 works were chosen because 1930 is the current public domain year in the United States.
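The curation rule the researchers describe reduces to a single cutoff: keep only works published by the end of 1930, the current US public-domain year. A sketch of that filter (the field names are hypothetical, not the project's actual schema):

```python
from dataclasses import dataclass

@dataclass
class Work:
    title: str
    year: int  # publication year

CUTOFF = 1930  # current US public-domain cutoff year

def in_training_set(work: Work) -> bool:
    """Keep only works published by the end of the cutoff year."""
    return work.year <= CUTOFF

corpus = [Work("The Great Gatsby", 1925), Work("Brave New World", 1932)]
kept = [w.title for w in corpus if in_training_set(w)]
print(kept)  # ['The Great Gatsby']
```

The hard part in practice isn't the comparison but reliably dating scanned works; a reprint of a pre-1931 text carries a later imprint year, so a real pipeline has to resolve first-publication dates, not scan dates.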

In other words, if you're looking for information on World War II, the election of Franklin D. Roosevelt, Amelia Earhart's solo Atlantic flight, or how a microwave oven works, you're out of luck. Ask it about Betty Boop, flappers, the state of the US economy as the Great Depression began, or the sociological effects of the introduction of car radios, and you've come to the right place.

This isn't the first vintage AI model to appear, mind you; others trained on Victorian literature and pre-1900 scientific texts are already out in the world. It is, however, the largest its creators are aware of.

12
23

Japan’s famously conscientious but overburdened baggage handlers will soon be joined by extra staff at Tokyo’s Haneda airport – although their new colleagues will need to take regular recharging breaks.

Japan Airlines will introduce humanoid robots on a trial basis from the beginning of May, with a view to deploying them permanently as a solution to the country’s chronic labour shortage.

The Chinese-made humanoids will move travellers’ luggage and cargo on the tarmac at Haneda, which handles more than 60 million passengers a year.

JAL and its partner in the initiative, Japan Airlines GMO Internet Group, hope the experiment – which ends in 2028 – will lessen the burden on human employees amid a surge in inbound tourism and forecasts of more severe labour shortages.

In a demonstration for the media this week, a 130cm-tall robot manufactured by Hangzhou-based Unitree was seen tentatively “pushing” cargo on to a conveyor belt next to a JAL passenger plane and waving to an unseen colleague.

13
21

These days, the hype is all about AI and robots, but almost a decade ago, the tech du jour was self-driving. You couldn’t swing a lanyard at CES for the latter half of the last decade without hitting a robotaxi; post-COVID, the number of startups has shrunk, but the technology has definitely matured. Go to the right cities—San Francisco and Austin, Texas, spring to mind—and you might see dozens of sensor-festooned vehicles among the downtown traffic.

The pod-like robotaxis belonging to Zoox stand out. Other robotaxi developers are retrofitting existing vehicles like Hyundai Ioniq 5s with sensors and the computing power necessary for self-driving. Zoox, which was bought by Amazon in 2020, did that with its test fleet, but as it starts to offer ride-hailing services—currently in Las Vegas and San Francisco—it’s doing so with a purpose-built design that looks like it just drove off the set of a big-budget sci-fi production.

“A robotaxi is not a car; it’s not a human-driven vehicle, and the requirements are wildly different, although it has to live in that world,” explained Chris Stoffel, director of robot industrial design and studio engineering at Zoox.

It all starts with the sensors, each perched on a little ledge projecting from the top four corners of the robotaxi’s body. From up there, each has an unobstructed, high-level view, giving the Zoox robotaxi good situational awareness, particularly straight ahead. “Because we don’t have a traditional hood, we’ve optimized our frontal coverage in a way that would be nearly impossible on a retrofitted vehicle,” said Zoox director of sensor engineering Ryan McMichaels.

I've yet to see one of these in the wild, so I'm guessing they're geofencing to downtown and UT. Waymos, on the other hand ... I see multiple ones a day, and I don't get out all that much.

14
50
submitted 5 days ago by chobeat@lemmy.ml to c/technology@beehaw.org
15
41

AI companies are beginning to entertain the possibility that they could cease to exist. This notion was, until recently, more theoretical: A couple of years ago, an ex-OpenAI employee named Leopold Aschenbrenner wrote a lengthy memo speculating that the U.S. government might soon take control of the industry. By 2026 or 2027, Aschenbrenner wrote, an “obvious question” will be circling through the Pentagon and Congress: Do we need a government-led program for artificial general intelligence—an AGI Manhattan Project? He predicted that Washington would decide to go all in on such an effort.

Aschenbrenner may have been prescient. Earlier this year, at the height of the Pentagon’s ugly contract dispute with Anthropic, Secretary of Defense Pete Hegseth warned that he could invoke the Defense Production Act (DPA), a Cold War–era law that he reportedly suggested would allow him to force the AI company to hand over its technology on whatever terms the Pentagon desired. The act is one of numerous levers the Trump administration can pull to direct, or even commandeer, AI companies. And the companies have been giving the administration plenty of reason to consider doing so.

Future bots could help design and carry out biological, nuclear, and chemical warfare. They could be weaponized to take down power grids, monitor congressional emails, and black out major media outlets. These aren’t purely hypothetical concerns: Earlier this month, Anthropic announced it had developed a new AI model, Claude Mythos Preview, capable of orchestrating cyberattacks on the level of elite, state-sponsored hacking cells, potentially putting a private company’s cyber offense on par with that of the CIA and NSA. In an example of Mythos’s power, Anthropic researchers described how the model used a “moderately sophisticated multi-step exploit” to work around restrictions and gain broad internet access, then emailed a researcher—much to his surprise—while he was eating a sandwich in the park.

Washington is getting antsy about the power imbalance. Over the past year, multiple senators have proposed legislation that would order federal agencies to explore “potential nationalization” of AI. Murmurs of possible tactics abound—including more talk within the administration of the DPA after Anthropic’s Mythos announcement, one person with knowledge of such discussions told us. Meanwhile, Silicon Valley is watching carefully. In recent weeks, Elon Musk, OpenAI’s CEO Sam Altman, and Palantir’s CEO Alex Karp have publicly spoken about the possibility of nationalization. Lawyers who represent Silicon Valley’s biggest AI firms are paying attention.

Worth noting, later in the story it's pointed out why full nationalization is vanishingly unlikely, but more federal oversight is likely.

16
29

A trusted analyst has claimed that OpenAI is working with two chip giants to create custom processors for an OpenAI phone.

17
37
18
20
submitted 6 days ago* (last edited 5 days ago) by ComradeMiao@beehaw.org to c/technology@beehaw.org

Features: mainly time, accurate distance, pace, somewhat waterproof.

VO2 max, maps, etc. would be nice but not necessary for me.

19
10

I didn’t know about “mukbangs”—a portmanteau of the Korean words “meokneun” (eating) and “bangsong” (broadcast)—until Enid Frances appeared on my TikTok account’s explore page. Enid, wearing a pink tank top and bright lipstick, taps on the plastic packaging of a chocolate cake. She’s silent, making intense eye contact with us. She lifts the lid. She gulps milk. Her fork enters. The chewing is exaggerated and loud. Two minutes pass, and a quarter of the cake is consumed. Enid flashes a thumbs up.

Videos like Enid’s, seemingly innocuous by nature, have managed to incite international controversy. In 2024, sensationalistic reports surfaced about 24-year-old Pan Xiaoting, who supposedly ate herself to death on air. She promised followers she would consume 10kg of food in one sitting, and was said to have eaten until her stomach tore. Though there was skepticism about the veracity of Xiaoting’s story, additional influencers have since allegedly perished.

China now bans the filming and streaming of mukbangs. The Philippines proposed a similar ban. Meanwhile, in South Korea, public health guidelines cracking down on mukbangs are raising questions about government infringement on freedom of choice. For American viewers, the mukbang remains an unregulated and relatively new form of entertainment.

How is this a thing? I rarely enjoy watching people eat across the table.

20
12
21
44

If so, are these programs that claim to 'poison' the training datasets actually effective?

22
74

Microsoft has committed to improving the quality and reliability of Windows, and a step on the path to that goal is… encouraging a chunk of its US staff to leave the company.

As confirmed by The Register sources, the company has announced, via internal memo, a voluntary buyout scheme for US employees. So if you work in that region, are at the senior director level or below, and if your age plus years of employment at Microsoft comes to 70 or higher – you might be eligible to leap from the gangplank of the good ship Nadella rather than receiving a shove from HR.

There will be some exceptions, including employees with sales incentive plans, but a figure of approximately 7 percent is a guide to how big a chunk of the workforce could be eligible. That translates to just under 9,000 employees.
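The reported eligibility test is a simple arithmetic rule. A sketch of it as described by The Register's sources (the memo's exact wording and carve-outs, such as the sales-incentive-plan exclusion, aren't public, so this is illustrative only):

```python
def eligible(age: int, tenure_years: int,
             at_or_below_senior_director: bool, us_based: bool = True) -> bool:
    """Reported buyout test: US-based, senior director level or below,
    and age plus years at the company totaling 70 or more."""
    return us_based and at_or_below_senior_director and (age + tenure_years >= 70)

print(eligible(age=55, tenure_years=15, at_or_below_senior_director=True))  # True
print(eligible(age=45, tenure_years=10, at_or_below_senior_director=True))  # False
```

"Rule of 70"-style thresholds are a classic early-retirement screen: by construction they select the oldest and longest-tenured employees, which is exactly the institutional-knowledge concern raised in the comment below the excerpt.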

Buyouts always reduce quality. The most expensive employees are the ones with deep institutional knowledge. That's great for quarterly results, but little else.

23
38
24
26

Once upon a time, they were counterculture idealists bringing power to the people. Today they’re greedy monopolists who’d sooner destroy our democracy than be reined in by government in any way—and they have to be stopped.

25
28
Rediscovering the Handcart (solar.lowtechmagazine.com)
submitted 1 week ago by alyaza@beehaw.org to c/technology@beehaw.org

People still use large handcarts in so-called “developing countries”. However, they can be just as useful again in the large cities of the industrialized world, as I can testify after using one for a couple of months. Last autumn, I received an internship application from Kozimo, who studies at the Design Academy Eindhoven. In his application, Kozimo sent a video of a large handcart he made, which he was driving on the streets of Rotterdam, the Netherlands.

I have always dreamt of a handcart. I have never owned a car, and the only times I miss one are when I have to move stuff, something which has become increasingly common lately. Consequently, I proposed to Kozimo to build a handcart for me.

Now, I can no longer imagine living without it. I have used the vehicle to move houses and offices, pick up materials and objects I bought online, new or second-hand, and transport workshop and event materials (bike generators, solar panels, solar ovens, books, sound systems). I have done the same for friends. During these trips, I often took home materials, furniture, or objects that I found for free on the streets of Barcelona.

Unlike a van or a car, my handcart doesn’t need gasoline, electricity, or batteries, making it entirely independent from energy infrastructures. Neither do I need to pay taxes and insurance. The handcart is a very democratic vehicle. It allows anyone to carry a load wherever they want, while older, less affordable cars and vans are no longer allowed to enter city centers due to the installation of Low Emission Zones.

It would make a lot of sense to offer vehicles like this at community centers, where they are available for all neighbors to use when needed. Few people would need a handcart each day, and communal use would solve the parking problem. Although our handcart can also be parked vertically, it won’t fit in most apartments.


Technology

42852 readers
27 users here now

A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

Subcommunities on Beehaw:


This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 4 years ago
MODERATORS