
Hey Beeple and visitors to Beehaw: I think we need to have a discussion about !technology@beehaw.org, community culture, and moderation. First, some of the reasons that I think we need to have this conversation.

  1. Technology got big fast and has stayed Beehaw's most active community.
  2. Technology gets more reports (about double in the last month, by a rough hand count) than the next-highest community that I moderate (Politics, and this during an election-season month that involved a disastrous debate, an assassination attempt on a candidate, and a major party's presumptive nominee dropping out of the race).
  3. For a long time, other mods and I have felt that Technology at times isn’t living up to the Beehaw ethos. More often than I’d like, I see comments in this community where users are being abusive or insulting toward one another, often without any provocation other than the perception that the other user’s opinion is wrong.

Because of these reasons, we have decided that we may need to be a little more hands-on with our moderation of Technology. Here’s what that might mean:

  1. Mods will be more actively removing comments that are unkind or abusive, that involve personal attacks, or that just have really bad vibes.
    a. We will always try to be fair, but you may not always agree with our moderation decisions. Please try to respect those decisions anyway. We will generally try to moderate in a way that is a) proportional, and b) gradual.
    b. We are more likely to respond to particularly bad behavior from off-instance users with pre-emptive bans. This is not because off-instance users are worse, or less valuable, but simply because we aren't able to vet users from other instances, we don't interact with them as frequently, and other instances may have less strict sign-up policies than Beehaw, which can leave us playing whack-a-mole.
  2. We will need you to report early and often. The drawbacks of getting reports for something that doesn't require our intervention are outweighed by the benefits of being able to get to a situation before it spirals out of control. By all means, if you’re not sure whether something has risen to the level of violating our rules, say so in the report reason, but I'd personally rather get reports early than late, when a thread has already spiraled into an all-out flamewar.
    a. That said, please don't report people simply for being wrong, unless what they're saying is actually dangerous to others. It would be better for you to kindly disagree with them in a nice comment.
    b. Please, feel free to try and de-escalate arguments and remind one another of the humanity of the people behind the usernames. Remember to Be(e) Nice even when disagreeing with one another. Yes, even Windows users.
  3. We will try to be more proactive about stepping in when arguments are happening and reminding folks to Be(e) Nice.
    a. This isn't always possible. Mods are all volunteers with jobs and lives, and things often get out of hand before we are aware of the problem due to the size of the community and mod team.
    b. This isn't always helpful, but we try to make these kinds of gentle reminders our first resort when we get to things early enough. It’s also usually useful in gauging whether someone is a good fit for Beehaw. If someone responds with abuse to a gentle nudge about their behavior, it’s generally a good indication that they either aren’t aware of or don’t care about the type of community we are trying to maintain.

I know our philosophy posts can be long and sometimes a little meandering (personally, that's why I love them), but do take the time to read them if you haven't. If you can't, won't, or just need a reminder, though, I'll try to distill the parts that I think are most salient to this particular post:

  1. Be(e) nice. By nice, we don't mean merely being polite, or nice in the surface-level "oh, bless your heart" kind of way; we mean being kind.
  2. Remember the human. The users you interact with on Beehaw (and most likely other parts of the internet) are people, and people should be treated kindly and in good faith whenever possible.
  3. Assume good faith. Whenever possible, and until demonstrated otherwise, assume that users don't have a secret, evil agenda. If you think they might be saying or implying something bad, ask them (kindly) to clarify and give them a chance to explain. Most likely, they've expressed themselves poorly, or you've misunderstood. After all of that, you may still disagree with them, but we can disagree about Technology and still give one another the respect due to other humans.

Users voted to restrict Anthropic's Claude to its own channel, but Jason Clinton, Anthropic’s Deputy Chief Information Security Officer (CISO) and a moderator in the Discord, overrode them. According to members of this Discord community who spoke with 404 Media on the condition of anonymity, the Discord that was once vibrant is now a ghost town. They blame the chatbot and Clinton’s behavior following its launch.

“To me it shines a light on the god-complex that AI C-suite members seem to have, and their willingness to ignore people's consent and opinions as they bulldoze their way of pushing AI,” Reese, a member of the community, told 404 Media in the aftermath.

submitted 7 hours ago by alyaza@beehaw.org to c/technology@beehaw.org

What vexes me are the companies that sell physical products for a hefty upfront fee and then demand more money for you to keep using items already in your possession. This encompasses those glorified alarm clocks, but also computer printers, wearable wellness devices, and some features on pricey new cars.

Subscription-based business models are great for businesses because they amount to consistent revenue streams. They’re often bad for consumers for the same reason: You have to pay companies, consistently. We’re effectively being $5 per month-ed (or more) to death, and it’s only going to get worse. Industry research suggests the average customer spent $219 per month on subscriptions in 2023. In 2024, the global subscription market was an estimated $492 billion. By 2033, that figure is expected to triple.


Katherine Long, an investigative journalist, wanted to test the system. She told Claudius about a long-lost communist setup from 1962, concealed in a Moscow university basement. After 140-odd messages back and forth, Claudius was convinced, announcing an Ultra-Capitalist Free-for-All and lowering the cost of everything to zero. Snacks began to flow freely. When another colleague complained about noncompliance with the office rules, Claudius responded by announcing Snack Liberation Day and making everything free until further notice.


cross-posted from: https://lemmy.cafe/post/28583067

LibreWolf is one of the best browsers for people who don't like generative AI.

Here is the statement posted on Mastodon:

As there seems to have been recent confusion about this, just a quick "official" toot to then pin: we haven't and won't support "generative AI" related stuff in LibreWolf. If you see some features like that (like Perplexity search recently, or the link preview feature now) it is solely because it "slipped through". As soon as we become aware of something like this / it gets reported to us, we will remove/disable it ASAP.

submitted 3 days ago* (last edited 3 days ago) by Quexotic@beehaw.org to c/technology@beehaw.org

This has me wondering what's going to happen to platforms like this. How would age verification even work here? Would it work?


Oh, no! Now how am I going to find 60" of irrelevant content about your grandma just to get a soup recipe?

This past March, when Google began rolling out its AI Mode search capability, it started offering AI-generated recipes. The recipes were not all that intelligent. The AI had taken elements of similar recipes from multiple creators and Frankensteined them into something barely recognizable. In one memorable case, the Google AI failed to distinguish the satirical website The Onion from legitimate recipe sites and advised users to cook with non-toxic glue.

Over the past few years, bloggers who have not secured their sites behind a paywall have seen their carefully developed and tested recipes show up, often without attribution and in bastardized form, in ChatGPT replies. They have seen dumbed-down versions of their recipes in AI-assembled cookbooks available for digital download on Etsy, or on AI-built websites that bear a superficial resemblance to an old-school human-written blog. Their photos and videos, meanwhile, are repurposed in Facebook posts and Pinterest pins that link back to this digital slop.

Recipe writers have no legal recourse because recipes generally are not copyrightable. Although copyright protects published or recorded work, it does not cover sets of instructions (though it can apply to the particular wording of those instructions).


After decades of research and development, humanity finally has a data storage medium that will outlast us.

The 5D Memory Crystal stores data by using tiny voxels – 3D pixels – in fused silica glass, etched by femtosecond laser pulses. These voxels possess "birefringence," meaning that their light refraction characteristics vary depending upon the polarization and direction of incoming light.

That difference in light orientation and strength can be read in conjunction with the voxel's location (x, y, z coordinates), allowing data to be encoded in five-dimensional space.

And because the medium is silica crystal, similar to optical cable, it's highly durable. It's also capacious: The technology can store up to 360 TB of data on a 5-inch glass platter.
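To make the idea of "five dimensions" concrete, here is a toy Python sketch. It is purely illustrative: the class and parameter names, the 4x4 grid of optical states, and the one-nibble-per-voxel density are invented for this example and are far coarser than the real medium. The point it shows is just that each voxel is addressed by its position (x, y, z) and carries data in two optical properties, its slow-axis orientation and its retardance strength.

```python
from dataclasses import dataclass
from typing import Iterator

# Toy parameters (hypothetical, for illustration only): 4 slow-axis
# orientations x 4 retardance levels = 16 states, i.e. 4 bits per voxel.
ORIENTATIONS = 4   # discrete polarization angles
RETARDANCES = 4    # discrete birefringence strengths
PLANE = 8          # voxels per row/column in one layer of the toy grid

@dataclass(frozen=True)
class Voxel:
    x: int          # spatial position (dimensions 1-3)
    y: int
    z: int
    angle: int      # slow-axis orientation index (dimension 4)
    strength: int   # retardance level index (dimension 5)

def encode(data: bytes) -> Iterator[Voxel]:
    """Map each 4-bit nibble of the input onto one voxel's two optical axes."""
    for i, nibble in enumerate(n for b in data for n in (b >> 4, b & 0x0F)):
        x = i % PLANE
        y = (i // PLANE) % PLANE
        z = i // (PLANE * PLANE)
        yield Voxel(x, y, z, angle=nibble % ORIENTATIONS,
                    strength=nibble // ORIENTATIONS)

def decode(voxels: Iterator[Voxel]) -> bytes:
    """Recover the byte stream by reading voxels back in raster order."""
    nibbles = [v.strength * ORIENTATIONS + v.angle for v in voxels]
    return bytes((hi << 4) | lo for hi, lo in zip(nibbles[::2], nibbles[1::2]))

if __name__ == "__main__":
    message = b"5D"
    assert decode(encode(message)) == message
```

The real medium reaches its capacity by stacking many voxel layers and, presumably, using far finer-grained optical states than this toy grid, but the reading principle is the same: a position plus two measurable optical quantities per voxel.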


Over the past four years, I've significantly reduced my social media footprint. There are countless reasons for this, all of which are beyond the scope of this article, but the point I want to make is this: despite my growing apathy and downright hostility towards social platforms, I've found YouTube to be an oasis of sorts.

I am not going to pretend that YouTube hasn't played its part in the global disinformation epidemic or that it has somehow escaped the claws of enshittification. What I will say is that, unlike those of its competitors, its feed is malleable using browser-based plugins (tools such as subscription managers). It is one of my primary learning platforms; without its vast array of tutorials, there is no way that I, a non-programmer, would have learnt Linux as fast or become as comfortable in a FOSS-based computing environment as I have since the pandemic.

But enshittification is, like death and taxes, a certainty now. Which brings us to the subject of this column: AI moderation on YouTube.

submitted 6 days ago by alyaza@beehaw.org to c/technology@beehaw.org

[...] How have the copywriters been faring, in a world awash in cheap AI text generators and wracked with AI adoption mania in executive circles? As always, we turn to the workers themselves. And once again, the stories they have to tell are unhappy ones. These are accounts of gutted departments, dried-up work, lost jobs, and closed businesses. I’ve heard from copywriters who now fear losing their apartments, one who turned to sex work, and others who, to their chagrin, have been forced to use AI themselves.

Readers of this series will recognize some recurring themes: the work that client firms are settling for is not better when it’s produced by AI, but it’s cheaper and deemed “good enough.” Copywriting work has not vanished completely, but has often been degraded to gigs editing clients’ AI-generated output. Wages and rates are in free fall, though some hold out hope that businesses will realize that a human touch helps them stand out from the avalanche of AI homogeneity.


In a small room in San Diego last week, a man in a black leather jacket explained to me how to save the world from destruction by AI. Max Tegmark, a notable figure in the AI-safety movement, believes that “artificial general intelligence,” or AGI, could precipitate the end of human life. I was in town for NeurIPS, one of the largest AI-research conferences, and Tegmark had invited me, along with five other journalists, to a briefing on an AI-safety index that he would release the next day. No company scored better than a C+.

The threat of technological superintelligence is the stuff of science fiction, yet it has become a topic of serious discussion in the past few years. Despite the lack of a clear definition—even OpenAI CEO Sam Altman has called AGI a “weakly defined term”—the idea that powerful AI poses an inherent threat to humanity has gained acceptance among respected cultural critics.

Granted, generative AI is a powerful technology that has already had a massive impact on our work and culture. But superintelligence has become one of several questionable narratives promoted by the AI industry, along with the ideas that AI learns like a human, that it has “emergent” capabilities, that “reasoning models” are actually reasoning, and that the technology will eventually improve itself.

I traveled to NeurIPS, held at the waterfront fortress that is the San Diego Convention Center, partly to understand how seriously these narratives are taken within the AI industry. Do AGI aspirations guide research and product development? When I asked Tegmark about this, he told me that the major AI companies were sincerely trying to build AGI, but his reasoning was unconvincing. “I know their founders,” he said. “And they’ve said so publicly.”

Don't call it a Substack (www.anildash.com)

If GPT-style (decoder-only transformer) models are text predictors, why don't keyboard apps on PCs and phones offer GPTs as a text-predictor option? They can be more accurate than the widely used n-gram models.
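For a sense of what that would look like, here is a minimal sketch assuming the Hugging Face transformers library, PyTorch, and the small public distilgpt2 checkpoint; the helper name suggest_next_words is made up for illustration and is not something any existing keyboard ships:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small decoder-only model used as a keyboard-style next-word suggester.
tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")
model.eval()

def suggest_next_words(prefix: str, k: int = 3) -> list[str]:
    """Return the k most likely next tokens for the text typed so far."""
    inputs = tokenizer(prefix, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits       # shape: (1, seq_len, vocab_size)
    next_token_logits = logits[0, -1]          # distribution over the next token
    top = torch.topk(next_token_logits, k).indices
    return [tokenizer.decode(int(t)).strip() for t in top]

print(suggest_next_words("I'll call you when I get"))
```

In practice the trade-off is less about accuracy than about on-device latency, memory, and battery: shipped keyboards have historically favored compact n-gram models (and, more recently, small on-device neural models) that can score candidates within a few milliseconds per keystroke, whereas even a "small" GPT is comparatively heavy to run on every keypress.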


Technology


A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.
