
Hey Beeple and visitors to Beehaw: I think we need to have a discussion about !technology@beehaw.org, community culture, and moderation. First, some of the reasons that I think we need to have this conversation.

  1. Technology got big fast and has stayed Beehaw's most active community.
  2. Technology gets more reports (about double in the last month, by a rough hand count) than the next-highest community that I moderate (Politics, and this during election season, in a month that involved a disastrous debate, an assassination attempt on a candidate, and a major party's presumptive nominee dropping out of the race).
  3. For a long time, I and other mods have felt that Technology at times isn’t living up to the Beehaw ethos. More often than I’d like, I see comments in this community where users are abusive or insulting toward one another, often without any provocation beyond the perception that the other user’s opinion is wrong.

Because of these reasons, we have decided that we may need to be a little more hands-on with our moderation of Technology. Here’s what that might mean:

  1. Mods will be more actively removing comments that are unkind or abusive, that involve personal attacks, or that just have really bad vibes.
    a. We will always try to be fair, but you may not always agree with our moderation decisions. Please try to respect those decisions anyway. We will generally try to moderate in a way that is a) proportional, and b) gradual.
    b. We are more likely to respond to particularly bad behavior from off-instance users with pre-emptive bans. This is not because off-instance users are worse, or less valuable, but simply that we aren't able to vet users from other instances and don't interact with them with the same frequency, and other instances may have less strict sign-up policies than Beehaw, making it more difficult to play whack-a-mole.
  2. We will need you to report early and often. The drawbacks of getting reports for something that doesn't require our intervention are outweighed by the benefits of us being able to get to a situation before it spirals out of control. By all means, if you’re not sure whether something rises to the level of violating our rules, say so in the report reason, but I'd personally rather get reports early than late, when a thread has already spiraled into an all-out flame war.
    a. That said, please don't report people simply for being wrong, unless they're wrong in a way that is actually dangerous to others. It would be better to kindly disagree with them in a nice comment.
    b. Please, feel free to try and de-escalate arguments and remind one another of the humanity of the people behind the usernames. Remember to Be(e) Nice even when disagreeing with one another. Yes, even Windows users.
  3. We will try to be more proactive in stepping in when arguments are happening and trying to remind folks to Be(e) Nice.
    a. This isn't always possible. Mods are all volunteers with jobs and lives, and things often get out of hand before we are aware of the problem due to the size of the community and mod team.
    b. This isn't always helpful, but we try to make these kinds of gentle reminders our first resort when we get to things early enough. It’s also usually useful in gauging whether someone is a good fit for Beehaw. If someone responds with abuse to a gentle nudge about their behavior, it’s generally a good indication that they either aren’t aware of or don’t care about the type of community we are trying to maintain.

I know our philosophy posts can be long and sometimes a little meandering (personally that's why I love them) but do take the time to read them if you haven't. If you can't/won't or just need a reminder, though, I'll try to distill the parts that I think are most salient to this particular post:

  1. Be(e) nice. By nice, we don't mean merely being polite, or nice in the surface-level "oh, bless your heart" way; we mean be kind.
  2. Remember the human. The users you interact with on Beehaw (and most likely other parts of the internet) are people, and people should be treated kindly and in good faith whenever possible.
  3. Assume good faith. Whenever possible, and until demonstrated otherwise, assume that users don't have a secret, evil agenda. If you think they might be saying or implying something you think is bad, ask them to clarify (kindly) and give them a chance to explain. Most likely, they've communicated themselves poorly, or you've misunderstood. After all of that, it's possible that you may disagree with them still, but we can disagree about Technology and still give one another the respect due to other humans.
submitted 54 minutes ago by Powderhorn@beehaw.org to c/technology@beehaw.org

If charging speed is one of the major stumbling blocks preventing people from considering an electric vehicle, then ChargePoint’s new Express Solo DC fast charger is a step in the right direction. It has been designed to be compact and work with DC power, making it easy to install in tight spaces. Oh, and it maxes out at a hefty 600 kW.

As we saw with yesterday’s news from CATL, EV batteries are getting more and more capable by the day. Increasing power can reduce charge times, as long as the battery can take it—BYD’s new Blade battery can charge at up to 1.5 MW, and megawatt chargers are already common across China.

Once again, you can see how badly the US is lagging in EVs. Most Tesla Superchargers max out at 250 kW, Electrify America stops at 350 kW, and even the new IONNA stations top out at 400 kW per plug. So the Express Solo’s 600 kW—as powerful as a Formula E pit stop—sets a new benchmark, particularly for a standalone charger that could live in an urban gas station or convenience store parking lot.
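The power figures above translate directly into best-case charging times: time equals energy delivered divided by charger power. A quick back-of-the-envelope sketch (real sessions taper as the battery fills, so these are floors, not promises; the 80 kWh figure is an illustrative battery top-up, not from the article):

```python
def charge_time_minutes(energy_kwh: float, power_kw: float) -> float:
    """Time to deliver energy_kwh at a constant power_kw, in minutes."""
    return energy_kwh / power_kw * 60

# Hypothetical example: adding 80 kWh at the charger ratings mentioned above.
for power in (250, 350, 400, 600):
    print(f"{power} kW: {charge_time_minutes(80, power):.1f} min")
# 600 kW cuts the 250 kW time (19.2 min) down to 8 minutes flat.
```

The same 80 kWh that takes nearly 20 minutes on a 250 kW Supercharger would, in the ideal case, take 8 minutes at the Express Solo's peak.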


New gas projects linked to just 11 data center campuses around the US have the potential to create more greenhouse gases than the country of Morocco emitted in 2024. Emissions estimates from air permit documents examined by WIRED show that these natural gas projects—which are being built to power data centers to serve some of the US’s most powerful AI companies, including OpenAI, Meta, Microsoft, and xAI—have the potential to emit more than 129 million tons of greenhouse gases per year.

As tech companies race to secure massive power deals to build out hundreds of data centers across the country, these projects represent just the tip of the iceberg when it comes to the potential climate cost of the AI boom.

The infrastructure on this list of large natural gas projects reviewed by WIRED is being developed largely to bypass the grid and provide power solely for data centers, a trend known as behind-the-meter power. As data center developers face long waits for connections to traditional utilities, and amid mounting public resistance to the possibility of higher energy bills, making their own power is becoming an increasingly popular option. These projects have either been announced or are under construction, with companies already submitting air permit application materials to state agencies.


Palantir has won a $300 million contract from the US Department of Agriculture (USDA) to support the National Farm Security Action Plan (NFSAP) and modernize how USDA delivers services to America's farmers.

The agreement aims to boost farm security and support USDA’s Farm Production and Conservation (FPAC), which facilitates crop insurance, conservation programs, farm safety net programs, lending, and disaster programs.

Palantir has pledged to provide operational software to enable the USDA to boost supply chain resilience and protect programs from fraud, abuse, and "foreign adversary influence." The government department is to gain critical visibility into risks that can affect America’s agricultural production and food supply, the US spy-tech company said in a statement.

"Palantir has pledged" is a scary phrase.

Our newsroom AI policy (arstechnica.com)

Earlier this year, we committed to publishing a reader-facing explanation of how Ars Technica uses, and doesn’t use, generative AI. Translating our internal policy into a reader-facing document that meets our standards for clarity and precision took longer than I’d have liked, but I wanted to get it right rather than get it out fast. That document is now live, and you can find it below (and also linked in the footer of most pages on the site).

Our approach comes from two convictions: that AI cannot replace human insight, creativity, and ingenuity, and that these tools, used well, can help professionals do better work. From those starting points, it was always clear what we wouldn’t allow. AI would not become the author, the illustrator, or the videographer. These tools are best used by professionals in the service of their profession, not as a clever end run around it, and certainly not as a path to eventually replacing it.

The short version: Ars Technica is written by humans. Our reporting, analysis, and commentary are human-authored. Where we use AI tools in our workflow, we use them with standards and oversight, and humans make every editorial decision. Our policy covers how we handle text, research, source attribution, images, audio, and video.

These standards aren’t new. They’ve governed our editorial work since AI tooling became available. What’s new is making them visible to you. You deserve to see the rules we hold ourselves to, not just trust that they exist.

As promised after the Benj Edwards fiasco. Nice to see them following through.


This is the most well-articulated discussion of the ethical questions of generative AI that I’ve seen so far.

submitted 9 hours ago by ryujin470@fedia.io to c/technology@beehaw.org

I heard that they require plaintext data to work. What are the other factors to this?


Oh, the irony ...

Meta will begin tracking the mouse movements, clicks, and keystrokes of its US employees to generate high-quality training data for future AI agents, Reuters reports.

The news organization cites internal memos posted by the Meta Superintelligence Labs team in reporting on the new Model Capability Initiative employee-tracking software. That software will operate on specific work-related apps and websites and also make use of periodic screenshots to provide context for the AI training, according to the memo.

“This is where all Meta employees can help our models get better simply by doing their daily work,” the memo reads, in part, Reuters reports.

Meta spokesperson Andy Stone told Reuters that the collected training data will help Meta’s AI agents with tasks they sometimes struggle with, including “things like mouse movements, clicking buttons, and navigating dropdown menus.”

“If we’re building agents to help people complete everyday tasks using computers, our models need real examples of how we actually use them,” Stone said, adding that the collected data would not be used to evaluate employees.

Sure it won't. I find it hard to believe this isn't just admitting to systems that were already in place.


In the day that I have had access, I gave the new model a wide range of tasks.

  • A friend asked me to make a memorial image of her recently deceased cat along with two favorite toys. It crafted an image that looked like a highly personalized sympathy card.

  • It elegantly took two photos from my wedding and made them appear as if they were in an old-style photo album with photo corners.

  • My colleagues suggested a poster for a fictional event. I decided to create a Mike Allen look-alike contest in Washington Square Park this Sunday. (Of course, it's only fictional if no one shows up.)

It also made a handy infographic making "the case against candy corn" which I used unsuccessfully to convince two colleagues that the treat, which is neither candy nor corn, is also not good.

The full pages it designs are scary good. I'd go so far as to say this is definitely a shot across the bow for design work.

If you're used to AI images with absurd lettering and poor design decisions, the output included in the story suggests those days are over.


There were clues from the start that it was too good to be true. A headhunter emailed me with a job prospect – a journalist role with “a leading US technology and markets editorial team”. The opportunity, she said, was part of a confidential expansion and hadn’t been publicly posted.

My spidey-sense was tingling, but the timing was auspicious. I was on the lookout for new work as my maternity leave was coming to an end. Initially, the email seemed legitimate. When I Googled the sender, I found a headhunter with the same name and profile picture on LinkedIn, and the message was clearly tailored to me: it referenced several roles I’d previously held and identified my specific areas of expertise. “Your focus on the real-world impacts of AI, digital culture and the gig economy aligns perfectly with an internal, high-priority mandate I’m managing,” the headhunter wrote.

I emailed back. The headhunter asked me to send over my CV, along with my salary expectations, preferred work structure (remote, hybrid, or on-site), and geographic flexibility. In return, she shared a more detailed job description. The role was, indeed, perfect for me. Too perfect – as if someone had put my CV into ChatGPT and asked it to create a job description based directly on my experience. It was located in the city in which I live and offered a hybrid working arrangement, just as I’d requested. The biggest tell: I’d been ambitious with my salary suggestion, but this was offering significantly more.

By this point I was fairly sure I was being taken for a ride, but I still couldn’t figure out the scam. I found myself trying to justify the anomalies. It’s an American company, and salaries are generally higher there, aren’t they? I asked about next steps. Then the headhunter gave me feedback. My CV undersold my leadership skills, she said; it needed refining. If I liked, she could connect me with a specialist who would make my profile more compelling. They would discuss pricing directly with me.

I fell for a "resume help" scam a couple of years ago via LinkedIn. I don't even use the site anymore.

submitted 2 days ago by alyaza@beehaw.org to c/technology@beehaw.org

Internal documents reveal that Microsoft plans to temporarily suspend individual account signups to its GitHub Copilot coding product, as it transitions from requests (single interactions with Copilot) towards token-based billing.

The documents reveal that the weekly cost of running GitHub Copilot has doubled since the start of the year.

Microsoft also intends to tighten the rate limits on its individual and business accounts, and to remove access to certain models for those with the cheapest subscriptions.


A ClickFix campaign targeting macOS users delivers an AppleScript-based infostealer that collects credentials and live session cookies from 14 browsers, 16 cryptocurrency wallets, and more than 200 extensions.

Netskope Threat Labs researcher Jan Michael Alcantara told The Register the team initially observed the campaign last month, and has seen similar instances as recently as last week.

ClickFix is a super popular social engineering tactic used to trick people into executing malicious commands on their own computers, usually by clicking a fake computer problem fix or CAPTCHA prompt.

While the researchers don't know who the cookie thief is, they note the malware can infect both Windows and macOS machines (Netskope previously warned about the Windows-focused attacks) using client-side JavaScript to filter victims by user-agent, ignoring mobile devices and directing desktop users to either a Windows- or macOS-specific payload.
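The user-agent triage described above is simple string matching. A minimal sketch of that kind of branching, written in Python as a defender might replay it against web logs (the real campaign uses client-side JavaScript, and the marker strings here are illustrative assumptions, not from the report):

```python
# Substrings that typically mark a mobile user-agent.
MOBILE_MARKERS = ("iPhone", "iPad", "Android", "Mobile")

def classify_user_agent(ua: str) -> str:
    """Mimic the described triage: skip mobile, route desktop by OS."""
    if any(marker in ua for marker in MOBILE_MARKERS):
        return "ignore"          # mobile devices are skipped
    if "Windows" in ua:
        return "windows-payload"
    if "Macintosh" in ua or "Mac OS X" in ua:
        return "macos-payload"
    return "ignore"              # anything else is left alone

print(classify_user_agent("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)"))
# → macos-payload
```

Note the mobile check runs first: an iPhone user-agent also contains "Mac OS X", so ordering matters.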


Apple announced on Monday that it had named a replacement for Tim Cook as CEO, with head of hardware engineering John Ternus succeeding him on 1 September. Cook will stay at the company in the role of executive chair.

“It has been the greatest privilege of my life to be the CEO of Apple and to have been trusted to lead such an extraordinary company. I love Apple with all of my being,” Cook said in a press release.

Cook, who succeeded Apple co-founder Steve Jobs, has been CEO since 2011. He has overseen the global expansion of the company and its steady series of new, updated devices, though never attained the same tech visionary status as Jobs.

Cook’s tenure as CEO has marked a lucrative period of expansion for Apple as it entrenched its products, in particular the iPhone, in society and sought out new markets. Apple reported earlier this year it had its best ever quarter for iPhone sales, driven by renewed demand in China. The company’s market capitalization grew from around $350bn at the start of Cook’s time to over $4tn today.


Technology


A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.
