
Hey Beeple and visitors to Beehaw: I think we need to have a discussion about !technology@beehaw.org, community culture, and moderation. First, some of the reasons I think we need to have this conversation:

  1. Technology got big fast and has stayed Beehaw's most active community.
  2. Technology gets more reports (about double in the last month by a rough hand count) than the next highest community that I moderate (Politics, and this is during election season in a month that involved a disastrous debate, an assassination attempt on a candidate, and a major party's presumptive nominee dropping out of the race).
  3. For a long time, I and other mods have felt that Technology at times isn't living up to the Beehaw ethos. More often than I'd like, I see comments in this community where users are being abusive or insulting toward one another, often without any provocation other than the perception that the other user's opinion is wrong.

Because of these reasons, we have decided that we may need to be a little more hands-on with our moderation of Technology. Here’s what that might mean:

  1. Mods will be more actively removing comments that are unkind or abusive, that involve personal attacks, or that just have really bad vibes.
    a. We will always try to be fair, but you may not always agree with our moderation decisions. Please try to respect those decisions anyway. We will generally try to moderate in a way that is a) proportional, and b) gradual.
    b. We are more likely to respond to particularly bad behavior from off-instance users with pre-emptive bans. This is not because off-instance users are worse, or less valuable, but simply because we aren't able to vet users from other instances and don't interact with them with the same frequency, and other instances may have less strict sign-up policies than Beehaw, which can leave us playing whack-a-mole.
  2. We will need you to report early and often. The drawbacks of getting reports for something that doesn't require our intervention are outweighed by the benefits of us being able to get to a situation before it spirals out of control. By all means, if you're not sure whether something has risen to the level of violating our rules, say so in the report reason, but I'd personally rather get reports early than late, when a thread has spiraled into an all-out flamewar.
    a. That said, please don't report people for being wrong, unless they are wrong in a way that is actually dangerous to others. It would be better for you to kindly disagree with them in a nice comment.
    b. Please, feel free to try and de-escalate arguments and remind one another of the humanity of the people behind the usernames. Remember to Be(e) Nice even when disagreeing with one another. Yes, even Windows users.
  3. We will try to be more proactive in stepping in when arguments are happening and trying to remind folks to Be(e) Nice.
    a. This isn't always possible. Mods are all volunteers with jobs and lives, and things often get out of hand before we are aware of the problem due to the size of the community and mod team.
    b. This isn't always helpful, but we try to make these kinds of gentle reminders our first resort when we get to things early enough. It’s also usually useful in gauging whether someone is a good fit for Beehaw. If someone responds with abuse to a gentle nudge about their behavior, it’s generally a good indication that they either aren’t aware of or don’t care about the type of community we are trying to maintain.

I know our philosophy posts can be long and sometimes a little meandering (personally, that's why I love them), but do take the time to read them if you haven't. If you can't/won't or just need a reminder, though, I'll try to distill the parts that I think are most salient to this particular post:

  1. Be(e) nice. By nice, we don't mean merely being polite, or nice in the surface-level "oh bless your heart" kind of way; we mean being kind.
  2. Remember the human. The users that you interact with on Beehaw (and most likely other parts of the internet) are people, and people should be treated kindly and in good faith whenever possible.
  3. Assume good faith. Whenever possible, and until demonstrated otherwise, assume that users don't have a secret, evil agenda. If you think they might be saying or implying something you think is bad, ask them to clarify (kindly) and give them a chance to explain. Most likely, they've expressed themselves poorly, or you've misunderstood. After all of that, you may still disagree with them, but we can disagree about Technology and still give one another the respect due to other humans.

I didn’t know about “mukbangs”—a portmanteau of the Korean words “meongneun” (eating) and “bangsong” (broadcast)—until Enid Frances appeared on my TikTok account’s explore page. Enid, wearing a pink tank top and bright lipstick, taps on the plastic packaging of a chocolate cake. She’s silent, making intense eye contact with us. She lifts the lid. She gulps milk. Her fork enters. The chewing is exaggerated and loud. Two minutes pass, and a quarter of the cake is consumed. Enid flashes a thumbs up.

Videos like Enid’s, seemingly innocuous by nature, have managed to incite international controversy. In 2024, sensationalistic reports surfaced about 24-year-old Pan Xiaoting, who supposedly ate herself to death on air. She promised followers she would consume 10kg of food in one sitting, and was said to have eaten until her stomach tore. Though there was skepticism about the veracity of Xiaoting’s story, additional influencers have since allegedly perished.

China now bans the filming and streaming of mukbangs. The Philippines proposed a similar ban. Meanwhile, in South Korea, a crackdown through public health guidelines aimed at mukbangs is raising questions about government infringement on freedom of choice. For American viewers, the mukbang is an unregulated and relatively new form of entertainment.

How is this a thing? I rarely enjoy watching people eat across the table.


AI companies are beginning to entertain the possibility that they could cease to exist. This notion was, until recently, more theoretical: A couple of years ago, an ex-OpenAI employee named Leopold Aschenbrenner wrote a lengthy memo speculating that the U.S. government might soon take control of the industry. By 2026 or 2027, Aschenbrenner wrote, an “obvious question” will be circling through the Pentagon and Congress: Do we need a government-led program for artificial general intelligence—an AGI Manhattan Project? He predicted that Washington would decide to go all in on such an effort.

Aschenbrenner may have been prescient. Earlier this year, at the height of the Pentagon’s ugly contract dispute with Anthropic, Secretary of Defense Pete Hegseth warned that he could invoke the Defense Production Act (DPA), a Cold War–era law that he reportedly suggested would allow him to force the AI company to hand over its technology on whatever terms the Pentagon desired. The act is one of numerous levers the Trump administration can pull to direct, or even commandeer, AI companies. And the companies have been giving the administration plenty of reason to consider doing so.

Future bots could help design and carry out biological, nuclear, and chemical warfare. They could be weaponized to take down power grids, monitor congressional emails, and black out major media outlets. These aren’t purely hypothetical concerns: Earlier this month, Anthropic announced it had developed a new AI model, Claude Mythos Preview, capable of orchestrating cyberattacks on the level of elite, state-sponsored hacking cells, potentially putting a private company’s cyber offense on par with that of the CIA and NSA. In an example of Mythos’s power, Anthropic researchers described how the model used a “moderately sophisticated multi-step exploit” to work around restrictions and gain broad internet access, then emailed a researcher—much to his surprise—while he was eating a sandwich in the park.

Washington is getting antsy about the power imbalance. Over the past year, multiple senators have proposed legislation that would order federal agencies to explore “potential nationalization” of AI. Murmurs of possible tactics abound—including more talk within the administration of the DPA after Anthropic’s Mythos announcement, one person with knowledge of such discussions told us. Meanwhile, Silicon Valley is watching carefully. In recent weeks, Elon Musk, OpenAI’s CEO Sam Altman, and Palantir’s CEO Alex Karp have publicly spoken about the possibility of nationalization. Lawyers who represent Silicon Valley’s biggest AI firms are paying attention.

Worth noting: later in the story it's pointed out why full nationalization is vanishingly unlikely, though more federal oversight is likely.

4
18
submitted 7 hours ago (last edited 4 hours ago) by ComradeMiao@beehaw.org to c/technology@beehaw.org

Features: mainly time, accurate distance, pace, somewhat waterproof.

VO2, maps, etc. would be nice but not necessary for me.


Microsoft has committed to improving the quality and reliability of Windows, and a step on the path to that goal is… encouraging a chunk of its US staff to leave the company.

As confirmed by The Register sources, the company has announced, via internal memo, a voluntary buyout scheme for US employees. So if you work in that region, are at the senior director level or below, and if your age plus years of employment at Microsoft comes to 70 or higher – you might be eligible to leap from the gangplank of the good ship Nadella rather than receiving a shove from HR.

There will be some exceptions, including employees with sales incentive plans, but a figure of approximately 7 percent is a guide to how big a chunk of the workforce could be eligible. That translates to just under 9,000 employees.
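Out of curiosity, the eligibility rule as described boils down to a one-line check. A minimal sketch in Python, assuming the memo's criteria are exactly as reported here (US-based, senior director level or below, age plus years of service totaling 70 or more); the function name and the single exclusion flag are purely illustrative:

```python
# Toy restatement of the reported buyout eligibility rule; all names are illustrative,
# and the memo's exceptions (e.g. sales incentive plans) are reduced to one flag.
def eligible_for_buyout(age: int, years_at_microsoft: int,
                        is_us_based: bool, at_or_below_senior_director: bool,
                        on_sales_incentive_plan: bool = False) -> bool:
    return (is_us_based
            and at_or_below_senior_director
            and not on_sales_incentive_plan
            and age + years_at_microsoft >= 70)

# Example: a 52-year-old with 18 years of service (52 + 18 = 70) just clears the bar.
print(eligible_for_buyout(52, 18, True, True))  # True
```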

Buyouts always reduce quality: the most expensive employees are the ones with deep institutional knowledge. Shedding them is great for quarterly results, but little else.


If so, are these programs that claim to 'poison' the training datasets effective?

submitted 2 days ago by alyaza@beehaw.org to c/technology@beehaw.org

You're at a coffee shop. A song comes on. It's right on the tip of your tongue. You pull out your phone, tap a button, and it tells you what it is in a few seconds.

How does a phone listen to a few seconds of music through a noisy room and instantly match it against millions of songs?

Your first instinct might be that the phone is listening to the melody or recognizing the lyrics. It's neither of those. What it's actually doing is far more clever.
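The linked post presumably walks through the details; as a taste of the general idea (assuming it's the spectrogram-peak "constellation" fingerprinting approach popularized by Shazam, which the excerpt doesn't confirm), here is a rough Python sketch. Every parameter and name below is illustrative:

```python
# Rough sketch of audio fingerprinting via spectrogram peaks.
# Peaks in the time-frequency plane survive background noise far better than raw
# audio, and hashing *pairs* of peaks makes database lookups fast and specific.
import numpy as np
from scipy import signal

def fingerprint(samples: np.ndarray, rate: int) -> set:
    """Return hashes of the form (anchor_freq_bin, target_freq_bin, time_delta)."""
    # Short-time Fourier transform: a map of energy over time and frequency
    freqs, times, spec = signal.spectrogram(samples, fs=rate, nperseg=4096, noverlap=2048)
    spec = np.log(spec + 1e-10)

    # "Constellation map": keep the loudest bin per coarse frequency band per time slice
    peaks = []
    bands = np.array_split(np.arange(len(freqs)), 6)
    for t in range(spec.shape[1]):
        for band in bands:
            f = band[np.argmax(spec[band, t])]
            if spec[f, t] > spec[:, t].mean():   # crude loudness threshold
                peaks.append((t, f))

    # Hash each peak against a few peaks that follow it shortly afterwards
    hashes = set()
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1 : i + 10]:
            dt = t2 - t1
            if 0 < dt <= 20:
                hashes.add((int(f1), int(f2), int(dt)))
    return hashes
```

Recognition is then just a lookup: the hashes from a noisy few-second clip are matched against the precomputed hashes of every song in the catalog, and the song with the largest set of time-consistent matches wins.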

Rediscovering the Handcart (solar.lowtechmagazine.com)
submitted 2 days ago by alyaza@beehaw.org to c/technology@beehaw.org

People still use large handcarts in so-called “developing countries”. However, they can be just as useful again in the large cities of the industrialized world, as I can testify after using one for a couple of months. Last autumn, I received an internship application from Kozimo, who studies at the Design Academy Eindhoven. In his application, Kozimo sent a video of a large handcart he made, which he was driving on the streets of Rotterdam, the Netherlands.

I have always dreamt of a handcart. I have never owned a car, and the only times I miss one are when I have to move stuff, something which has become increasingly common lately. Consequently, I proposed that Kozimo build a handcart for me.

Now, I can no longer imagine living without it. I have used the vehicle to move houses and offices, pick up materials and objects I bought online, new or second-hand, and transport workshop and event materials (bike generators, solar panels, solar ovens, books, sound systems). I have done the same for friends. During these trips, I often took home materials, furniture, or objects that I found for free on the streets of Barcelona.

Unlike a van or a car, my handcart doesn’t need gasoline, electricity, or batteries, making it entirely independent from energy infrastructures. Neither do I need to pay taxes and insurance. The handcart is a very democratic vehicle. It allows anyone to carry a load wherever they want, while older, less affordable cars and vans are no longer allowed to enter city centers due to the installation of Low Emission Zones.

It would make a lot of sense to offer vehicles like this at community centers, where they are available for all neighbors to use when needed. Few people would need a handcart each day, and communal use would solve the parking problem. Although our handcart can also be parked vertically, it won’t fit in most apartments.


Once upon a time, they were counterculture idealists bringing power to the people. Today they’re greedy monopolists who’d sooner destroy our democracy than be reined in by government in any way—and they have to be stopped.


New gas projects linked to just 11 data center campuses around the US have the potential to create more greenhouse gases than the country of Morocco emitted in 2024. Emissions estimates from air permit documents examined by WIRED show that these natural gas projects—which are being built to power data centers to serve some of the US’s most powerful AI companies, including OpenAI, Meta, Microsoft, and xAI—have the potential to emit more than 129 million tons of greenhouse gases per year.

As tech companies race to secure massive power deals to build out hundreds of data centers across the country, these projects represent just the tip of the iceberg when it comes to the potential climate cost of the AI boom.

The infrastructure on this list of large natural gas projects reviewed by WIRED is being developed largely to bypass the grid and provide power solely for data centers, a trend known as behind-the-meter power. As data center developers face long waits for connections to traditional utilities, and amid mounting public resistance to the possibility of higher energy bills, making their own power is becoming an increasingly popular option. These projects have either been announced or are under construction, with companies already submitting air permit application materials to state agencies.


If charging speed is one of the major stumbling blocks preventing people from considering an electric vehicle, then ChargePoint’s new Express Solo DC fast charger is a step in the right direction. It has been designed to be compact and work with DC power, making it easy to install in tight spaces. Oh, and it maxes out at a hefty 600 kW.

As we saw with yesterday’s news from CATL, EV batteries are getting more and more capable by the day. Increasing power can reduce charge times, as long as the battery can take it—BYD’s new Blade battery can charge at up to 1.5 MW, and megawatt chargers are already common across China.

Once again, you can see how badly the US is lagging in EVs. Most Tesla Superchargers max out at 250 kW, Electrify America stops at 350 kW, and even the new IONNA stations top out at 400 kW per plug. So the Express Solo’s 600 kW—as powerful as a Formula E pit stop—sets a new benchmark, particularly for a standalone charger that could live in an urban gas station or convenience store parking lot.
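To put those charger ratings in perspective, here's a back-of-the-envelope calculation (mine, not the article's): the best-case time to take a hypothetical 100 kWh pack from 10 to 80 percent at each charger's peak power, ignoring charge-curve taper, losses, and whether the battery can actually sustain that rate:

```python
# Idealized charge-time comparison; the pack size and state-of-charge window are assumptions.
PACK_KWH = 100
ENERGY_NEEDED_KWH = PACK_KWH * (0.80 - 0.10)   # 70 kWh to go from 10% to 80%

for charger_kw in (250, 350, 400, 600, 1500):
    minutes = ENERGY_NEEDED_KWH / charger_kw * 60
    print(f"{charger_kw:>5} kW -> {minutes:4.1f} min (best case)")

# 250 kW -> 16.8 min, 600 kW -> 7.0 min, 1500 kW -> 2.8 min: headline power only
# matters if the battery's charge curve can hold somewhere near that peak.
```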


Palantir has won a $300 million contract from the US Department of Agriculture (USDA) to support the National Farm Security Action Plan (NFSAP) and modernize how USDA delivers services to America's farmers.

The agreement aims to boost farm security and support USDA’s Farm Production and Conservation (FPAC), which facilitates crop insurance, conservation programs, farm safety net programs, lending, and disaster programs.

Palantir has pledged to provide operational software to enable the USDA to boost supply chain resilience and protect programs from fraud, abuse, and "foreign adversary influence." The government department is to gain critical visibility into risks that can affect America’s agricultural production and food supply, the US spy-tech company said in a statement.

"Palantir has pledged" is a scary phrase.

Our newsroom AI policy (arstechnica.com)

Earlier this year, we committed to publishing a reader-facing explanation of how Ars Technica uses, and doesn’t use, generative AI. Translating our internal policy into a reader-facing document that meets our standards for clarity and precision took longer than I’d have liked, but I wanted to get it right rather than get it out fast. That document is now live, and you can find it below (and also linked in the footer of most pages on the site).

Our approach comes from two convictions: that AI cannot replace human insight, creativity, and ingenuity, and that these tools, used well, can help professionals do better work. From those starting points, it was always clear what we wouldn’t allow. AI would not become the author, the illustrator, or the videographer. These tools are best used by professionals in the service of their profession, not as a clever end run around it, and certainly not as a path to eventually replacing it.

The short version: Ars Technica is written by humans. Our reporting, analysis, and commentary are human-authored. Where we use AI tools in our workflow, we use them with standards and oversight, and humans make every editorial decision. Our policy covers how we handle text, research, source attribution, images, audio, and video.

These standards aren’t new. They’ve governed our editorial work since AI tooling became available. What’s new is making them visible to you. You deserve to see the rules we hold ourselves to, not just trust that they exist.

As promised after the Benj Edwards fiasco. Nice to see them following through.


I heard that they require plaintext data to work. What are the other factors involved?


This is the most well-articulated discussion of the ethical questions of generative AI that I've seen so far.


Oh, the irony ...

Meta will begin tracking the mouse movements, clicks, and keystrokes of its US employees to generate high-quality training data for future AI agents, Reuters reports.

The news organization cites internal memos posted by the Meta Superintelligence Labs team in reporting on the new Model Capability Initiative employee-tracking software. That software will operate on specific work-related apps and websites and also make use of periodic screenshots to provide context for the AI training, according to the memo.

“This is where all Meta employees can help our models get better simply by doing their daily work,” the memo reads, in part, Reuters reports.

Meta spokesperson Andy Stone told Reuters that the collected training data will help Meta's AI agents with tasks that they sometimes struggle with, including "things like mouse movements, clicking buttons, and navigating dropdown menus."

“If we’re building agents to help people complete everyday tasks using computers, our models need real examples of how we actually use them,” Stone said, adding that the collected data would not be used to evaluate employees.

Sure it won't. I find it hard to believe this isn't just admitting to systems that were already in place.


Technology


A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.
