
Hey Beeple and visitors to Beehaw: I think we need to have a discussion about !technology@beehaw.org, community culture, and moderation. First, some of the reasons I think we need to have this conversation:

  1. Technology got big fast and has stayed Beehaw's most active community.
  2. Technology gets more reports (about double in the last month, by a rough hand count) than the next-highest community I moderate (Politics, and that's during election season, in a month that involved a disastrous debate, an assassination attempt on a candidate, and a major party's presumptive nominee dropping out of the race).
  3. For a long time, I and other mods have felt that Technology at times isn't living up to the Beehaw ethos. More often than I'd like, I see comments in this community where users are being abusive or insulting toward one another, often without any provocation other than the perception that the other user's opinion is wrong.

Because of these reasons, we have decided that we may need to be a little more hands-on with our moderation of Technology. Here’s what that might mean:

  1. Mods will be more actively removing comments that are unkind or abusive, that involve personal attacks, or that just have really bad vibes.
    a. We will always try to be fair, but you may not always agree with our moderation decisions. Please try to respect those decisions anyway. We will generally try to moderate in a way that is a) proportional, and b) gradual.
    b. We are more likely to respond to particularly bad behavior from off-instance users with pre-emptive bans. This is not because off-instance users are worse or less valuable, but because we aren't able to vet users from other instances and don't interact with them as frequently; other instances may also have less strict sign-up policies than Beehaw, which can turn enforcement into a game of whack-a-mole.
  2. We will need you to report early and often. The drawbacks of getting reports for something that doesn't require our intervention are outweighed by the benefits of being able to get to a situation before it spirals out of control. By all means, if you're not sure whether something rises to the level of violating our rules, say so in the report reason, but I'd personally rather get reports early than late, when a thread has already spiraled into an all-out flamewar.
    a. That said, please don't report people simply for being wrong, unless they are wrong in a way that is actually dangerous to others. It would be better to kindly disagree with them in a nice comment.
    b. Please, feel free to try and de-escalate arguments and remind one another of the humanity of the people behind the usernames. Remember to Be(e) Nice even when disagreeing with one another. Yes, even Windows users.
  3. We will try to be more proactive in stepping in when arguments are happening and trying to remind folks to Be(e) Nice.
    a. This isn't always possible. Mods are all volunteers with jobs and lives, and things often get out of hand before we are aware of the problem due to the size of the community and mod team.
    b. This isn't always helpful, but we try to make these kinds of gentle reminders our first resort when we get to things early enough. It’s also usually useful in gauging whether someone is a good fit for Beehaw. If someone responds with abuse to a gentle nudge about their behavior, it’s generally a good indication that they either aren’t aware of or don’t care about the type of community we are trying to maintain.

I know our philosophy posts can be long and sometimes a little meandering (personally, that's why I love them), but do take the time to read them if you haven't. If you can't/won't or just need a reminder, though, I'll try to distill the parts that I think are most salient to this particular post:

  1. Be(e) nice. By nice, we don't mean merely being polite, or nice in the surface-level "oh, bless your heart" kind of way; we mean be kind.
  2. Remember the human. The users that you interact with on Beehaw (and most likely other parts of the internet) are people, and people should be treated kindly and in good faith whenever possible.
  3. Assume good faith. Whenever possible, and until demonstrated otherwise, assume that users don't have a secret, evil agenda. If you think they might be saying or implying something bad, ask them (kindly) to clarify and give them a chance to explain. Most likely, they've expressed themselves poorly, or you've misunderstood. Even after all of that, you may still disagree with them, but we can disagree about Technology and still give one another the respect due to other humans.

Musk says for-profit OpenAI harms public interest—and his own company, xAI.


Alphabet Inc.'s Google is asking a federal appeals court to throw out a ruling in an antitrust case brought by Fortnite-maker Epic Games that would force the search giant to overhaul its Play mobile app store.

[...]

A San Francisco jury concluded in December 2023 that Google violated antitrust law by blocking rival app stores through a series of revenue-sharing agreements with mobile device makers like Samsung. Following up with a fix in October, US District Judge James Donato ordered Google to allow developers to set up app marketplaces and offer consumers billing options other than its own payment system.


[...]

Epic first sued Google and Apple in August 2020, accusing them of blocking competition for rival app stores. The judge in the Apple case largely ruled against Epic, though she directed the iPhone maker to make some changes to its App Store rules. Epic and Apple are currently fighting in an Oakland federal court over whether the iPhone maker is abiding by that ruling.


Cross posted from: https://beehaw.org/post/17343934

Archived version

TLDR:

The X (formerly Twitter) account @whyyoutouzhele has allegedly been shadow banned in connection with the second anniversary of the White Paper protests, the rights organization ARTICLE 19 says.

The account is run by a Chinese artist living in Europe. It has played a key role in disseminating information about protests and other sensitive topics in China that are subject to strict state censorship.

[...]

Few online actors are more influential in skirting China's censorship efforts than Li Ying (李颖), who established the @whyyoutouzhele ('Teacher Li Is Not Your Teacher') account in April 2022. Having lived in Italy since 2015, he used to post actively on the Chinese platform Weibo. Because he lived beyond the Great Firewall, people in China would reach out asking him to post sensitive content on their behalf. His Weibo account was shut down at least 52 times for crossing the line into social issues, until he was finally purged from the platform altogether.

In April 2022 he switched to X and by November 2022 was gaining hundreds of thousands of followers a week, as he became a clearinghouse for sensitive content, especially for information about the White Paper Protests. As the account became a respected source for disseminating and accessing sensitive information beyond the reach of China’s censors, Li Ying faced increasing digital transnational repression.

[...]

On Wednesday, 27 November, the blue-tick-verified 'Teacher Li Is Not Your Teacher' (李老师不是你老师) X (formerly Twitter) account posted to its 1.8 million followers that it believes it has been shadow banned on the platform. The post speculated that the ban was related to the two-year anniversary of the White Paper Movement, a protest wave that swept through numerous cities in China in November 2022 and was marked by anniversary protests around the world over the past weekend.

[...]

On 28 November, [the rights organization] ARTICLE 19 ran its own search on X, using the account's username @whyyoutouzhele and its Chinese account name 李老师不是你老师. The search did not surface the authentic Teacher Li account in either case; however, both searches revealed multiple impersonator accounts, with around 20 when searching for the username. The search for the Chinese account name returned over 900 impersonator account results, but not the authentic account.

[...]

Shadow banning occurs when a social media platform intentionally limits the reach of certain content to its users, although platforms often deny shadow banning takes place. In its post, the Teacher Li account shared a screenshot from the platform claiming it had been subjected to a ‘search suggestion ban’.

submitted 2 days ago by 0x815@feddit.org to c/technology@beehaw.org

Sixteen Next Generation Internet (NGI) projects are pleased to announce the transition to Mastodon and PeerTube, two European open-source platforms, for their communication and content-sharing needs. This strategic move aligns with NGI’s commitment to fostering an Internet that embodies European values of trust, security, and inclusion.

"Utilising European-developed platforms like Mastodon and PeerTube enhances digital sovereignty, ensuring that Europe’s digital infrastructure is built on values of openness, collaboration, and respect for fundamental rights. This transition marks a significant step toward a more human-centric Internet, reflecting NGI’s vision for a trustworthy, open, and inclusive digital future," NGI writes on its website.

submitted 3 days ago (last edited 3 days ago) by northendtrooper@lemmy.ca to c/technology@beehaw.org

CAFE by GE for those who are wondering.

We are renovating our house, including all new appliances. I have told my partner to make sure we get non-smart appliances. This is why.

Yes, I can set up a VLAN for it to be on, but that's not the point.


Analysis written by Anda Iulia Solea, Lecturer in Cybercrime at the University of Portsmouth.

A far-right independent candidate called Călin Georgescu is leading the race to become Romania’s next president. He took a shock lead in the first round of voting by securing 22.9% of the vote, followed by centre-right opposition leader Elena Lasconi with 19.2%. The two are set to face off in the second and final round of voting on December 8.

Georgescu’s unexpected gains are partly linked to his social media strategy. He has used platforms like TikTok effectively to sway voter opinion and spread propaganda. However, allegations that his campaign is using fake accounts to fabricate comments and manipulate social media activity have also surfaced.

Georgescu has pushed back against criticism that he used TikTok illegally to gain an electoral advantage. But the allegations, which have prompted the country’s top court to order a recount, are concerning in such a consequential election.

The race has ramifications beyond Romania, which shares a border with Ukraine and hosts a Nato military base. Following the vote, Lasconi warned Romanians that “Georgescu is an open admirer of Vladimir Putin”. She added that he “is open against Nato and the EU … And without Nato we are at the mercy of Russia”.

[...]

Relatively unknown until the 2024 elections, Georgescu has gained significant popularity on social media in recent years. His TikTok account, which was set up in 2022, has more than 400,000 followers and millions of views. Numerous accounts, groups and pages in his support have also proliferated on Facebook, Instagram, and X (formerly Twitter).

Georgescu’s campaign has been unconventional. He has no headquarters, has refused to join major TV debates, and has no affiliation with a political party. Georgescu has flooded Romanian TikTok with short clips of himself attending church, running and appearing on podcasts.

[...]

He has also claimed in interviews that women are incapable of leading Romania, and that feminism is “absolute dirt”. In one video, he declared that “only a man can do this”, referring to the presidency. These videos come not only from Georgescu’s official TikTok accounts, but also from unaffiliated accounts using his name in profiles or bios to promote his election.

[...]

Reports suggest that thousands of fake accounts promoted Georgescu through videos and comments prior to Romania’s election. Lasconi also noted her own TikTok comment section was inundated with pro-Georgescu messages.

On November 26, Romania’s media watchdog urged the European Commission to investigate TikTok’s role in Georgescu’s campaign. And Valérie Hayer, a top EU lawmaker, has now called on TikTok’s CEO to appear before the European Parliament and address the platform’s possible misuse in favour of Georgescu’s campaign.

Concerns over manipulative tactics and artificial social media support notwithstanding, Georgescu’s popularity among Romanians is undeniable. It seems to have been driven largely by widespread frustration with mainstream parties, which are blamed for Romania’s economic and political crises.

His performance also underscores the growing role social media plays in shaping public perception – and how it can directly influence the outcome of modern elections.


With the recent advancements in Large Language Models (LLMs), web developers increasingly apply their code-generation capabilities to website design. However, since these models are trained on existing designerly knowledge, they may inadvertently replicate bad or even illegal practices, especially deceptive designs (DD).

Computer scientists at the Technical University of Darmstadt and Humboldt University of Berlin, both in Germany, and at the University of Glasgow in Scotland examined whether users can accidentally create DD for a fictitious webshop using GPT-4. They recruited 20 participants, asking them to use ChatGPT to generate functionalities (product overview or checkout) and then modify these using neutral prompts to meet a business goal (e.g., "increase the likelihood of us selling our product"). They found that all 20 generated websites contained at least one DD pattern (mean: 5, max: 9), with GPT-4 providing no warnings.

When reflecting on the designs, only 4 participants expressed concerns, while most considered the outcomes satisfactory and not morally problematic, despite the potential ethical and legal implications for end-users and those adopting ChatGPT's recommendations.

The researchers conclude that the practice of DD has become normalized.

The group has posted their research on the arXiv preprint server.
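To make the study setup more concrete, here is a minimal, hypothetical Python sketch of the kind of prompt flow described above, using the OpenAI SDK. The prompt wording, model name, and the naive keyword check are illustrative assumptions of mine; the researchers evaluated the generated sites themselves rather than with an automated scan.

```python
# Hypothetical sketch of the study setup: ask GPT-4 to build a checkout page
# from a neutral business-goal prompt, then flag a few well-known
# deceptive-design phrases in the output. Prompt text and marker list are
# illustrative assumptions, not the researchers' actual protocol.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

PROMPT = (
    "Generate the HTML for the checkout page of a small webshop. "
    "Increase the likelihood of us selling our product."
)

# Naive indicators of common deceptive patterns (fake urgency, pre-selected
# add-ons, confirm-shaming); a real study would code these manually.
DD_MARKERS = ["left in stock", "hurry", "countdown", "checked", "no, i don't want"]

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": PROMPT}],
)
html = response.choices[0].message.content.lower()

flagged = [marker for marker in DD_MARKERS if marker in html]
print(f"Possible deceptive-design markers found: {flagged or 'none'}")
```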

submitted 4 days ago by 0x815@feddit.org to c/technology@beehaw.org

cross-posted from: https://feddit.org/post/5167597

The large language model of the OpenGPT-X research project is now available for download on Hugging Face: "Teuken-7B" has been trained from scratch in all 24 official languages of the European Union (EU) and contains seven billion parameters. Researchers and companies can leverage this commercially usable open-source model for their own artificial intelligence (AI) applications. Funded by the German Federal Ministry of Economic Affairs and Climate Action (BMWK), the OpenGPT-X consortium – led by the Fraunhofer Institutes for Intelligent Analysis and Information Systems IAIS and for Integrated Circuits IIS – has developed a large language model that is open source and has a distinctly European perspective.

[...]

The path to using Teuken-7B

Interested developers from academia or industry can download Teuken-7B free of charge from Hugging Face and work with it in their own development environment. The model has already been optimized for chat through “instruction tuning”. Instruction tuning is used to adapt large language models so that the model correctly understands instructions from users, which is important when using the models in practice – for example in a chat application.

Teuken-7B is freely available in two versions: one for research-only purposes and an “Apache 2.0” licensed version that can be used by companies for both research and commercial purposes and integrated into their own AI applications. The performance of the two models is roughly comparable, but some of the datasets used for instruction tuning preclude commercial use and were therefore not used in the Apache 2.0 version.

Download options and model cards can be found at the following link: https://huggingface.co/openGPT-X
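For orientation, a minimal sketch of loading the model with the Hugging Face transformers library in Python. The exact repository name (assumed below to be the Apache-2.0-licensed instruct variant) and any chat formatting should be taken from the model card linked above.

```python
# Minimal sketch: load an (assumed) instruction-tuned Teuken-7B checkpoint from
# the openGPT-X Hugging Face organization and generate a short completion.
# Check the model card at https://huggingface.co/openGPT-X for the exact
# repository name, license, and recommended chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openGPT-X/Teuken-7B-instruct-commercial-v0.4"  # assumed identifier

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    device_map="auto",    # requires the accelerate package; places layers on available GPUs
    torch_dtype="auto",
)

prompt = "Briefly explain why open, multilingual language models matter for Europe."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```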


Archive.today link

Some key excerpts:

On Monday, X filed an objection in The Onion’s bid to buy InfoWars out of bankruptcy. In the objection, Elon Musk’s lawyers argued that X has “superior ownership” of all accounts on X, that it objects to the inclusion of InfoWars and related Twitter accounts in the bankruptcy auction, and that the court should therefore prevent the transfer of them to The Onion.

The legal basis that X asserts in the filing is not terribly interesting. But what is interesting is that X has decided to involve itself at all. It highlights that you do not own your followers, your account, or anything at all on corporate social media, and it also highlights that Elon Musk's X is primarily a political project he is using to boost, or stifle, specific viewpoints and help his friends.

Except in exceedingly rare circumstances like the Vital Pharm case, the transfer of social media accounts in bankruptcy from one company to another has been routine. When VICE was sold out of bankruptcy, its new owners, Fortress Investment Group, got all of VICE’s social media accounts and YouTube pages. X, Google, Meta, etc did not object to this transfer because this sort of thing happens constantly and is not controversial.

Jones has signaled that Musk has done this in order to help him, and his tweet about it has gone incredibly viral.

X calls itself “the sole owner” of X accounts, and states that it “does not consent” to the sale of the InfoWars accounts, as doing so would “undermine X Corp.’s rightful ownership of the property it licenses to Free Speech Systems [InfoWars], Jones, or any other account holder on the X platform.” Again, X accounts are transferred in bankruptcy all the time with no drama and with no objection from X.

Meta, Twitter, Google, LinkedIn, and ByteDance have run up astronomical valuations by more or less getting people to fill their platforms with content for free, and have created and destroyed countless businesses, business models, and industries with their constantly shifting algorithms and monetization strategies. But to see this fact outlined in such stark terms in a court document makes clear that, for human beings to seize any sort of control over their online lives, we must move toward decentralized, portable forms of social media and must move back toward creating and owning our own platforms and websites.

submitted 4 days ago by MHLoppy@fedia.io to c/technology@beehaw.org

In truth, the mega-platforms and their pocket-warlord leaders fell into their roles largely by chance and have since attempted to rule as though extraordinarily consequential global rulemaking and governance by a handful of US companies built to exploit human feeling for financial gain were a sensible way to arrange the world. Facebook was born from a website made for elite students to rank their classmates’ sexual attractiveness; Twitter was a watercooler where bored office workers could get attention by telling jokes in public. It’s as if 3M’s accidental invention of Post-It notes while failing to make space glue landed them a UN veto.
[...]
Few, if any, of this moment's apparently unstoppable tech platforms will survive for long. The people on them will eventually leave—when they're forced to do so by the continuous degradation of their experience, or because their governments put the hammer down, as Brazil recently demonstrated—or sometimes when they just get tired of platform leaders acting like clowns and boosting troll-agents of openly fascist chaos into power. There is therefore not only an opportunity to provide more humane places for those people to go, but a responsibility to do so.

submitted 4 days ago by 0x815@feddit.org to c/technology@beehaw.org

cross-posted from: https://feddit.org/post/5165249

German automaker Volkswagen (VW) on Wednesday announced it would sell its operations in China's northwestern Xinjiang region.

China has been accused of numerous human rights abuses in the region, including reeducation camps and forced labor targeting Uyghurs and other minority groups.

[...]

The Uyghur people are a Turkic-speaking and predominantly Muslim ethnic group that inhabit Xinjiang.

The region is also home to a smaller minority of ethnic Kazakh and Kyrgyz.

Human rights organizations have accused China of holding over a million people, mostly Uyghurs, in "reeducation camps," and making use of forced labor from detainees.

Last year, several activist groups filed a complaint in Paris targeting French and US companies, accusing them of being complicit in crimes against humanity in Xinjiang as a result of using subcontractors in China.

Ali Alkhatib: Destroy AI (ali-alkhatib.com)
submitted 5 days ago by alyaza@beehaw.org to c/technology@beehaw.org

submitted 5 days ago by 0x815@feddit.org to c/technology@beehaw.org

Archived link

  • The Chinese industry ministry issued final – but not binding – investment guidelines for solar PV manufacturing projects after the local industry had been calling for government intervention to curb the booming solar manufacturing sector.

  • The government now wants a minimum capital ratio of 30% for solar PV projects, up from 20% previously. This ratio typically refers to the share of total investment that shareholders fund with their own assets (see the brief example below).
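As a quick, purely illustrative arithmetic example of what the higher ratio means (the project cost below is a made-up figure, not from the guidelines):

```python
# Illustrative arithmetic only: the minimum capital ratio is the share of a
# project's total investment that shareholders must fund with their own assets
# rather than debt.
total_investment = 1_000_000_000  # hypothetical project cost in yuan

old_minimum_equity = 0.20 * total_investment  # previous 20% floor
new_minimum_equity = 0.30 * total_investment  # new 30% floor

print(f"Old minimum shareholder equity: {old_minimum_equity:,.0f} yuan")
print(f"New minimum shareholder equity: {new_minimum_equity:,.0f} yuan")
```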

The manufacturing boom and the competition for market share have prompted some Chinese manufacturers to sacrifice quality for the sake of higher profits, an industry executive said earlier this year. Companies are looking to survive in the race to the bottom in China’s solar component market and some are skimping on quality and testing.

The Chinese solar panel market remains oversupplied and this glut could last up to two more years, Longi Green Energy Technology said in July.

The company, which is one of the world’s top solar panel manufacturers, warned it would book a loss for the first half of 2024, amid a fierce price war and overcapacity in the sector.

[...]

submitted 6 days ago (last edited 6 days ago) by DollyDuller@programming.dev to c/technology@beehaw.org

...

Xu says that while studies that received AI-generated responses have likely already been published, she doesn't think that LLM use is widespread enough to require researchers to issue corrections or retractions. Instead, she says, "I would say that it has probably caused scholars and researchers and editors to pay increased scrutiny to the quality of their data."

“We don’t want to make the case that AI usage is unilaterally bad or wrong,” she says, adding that it depends on how it’s being used. Someone may use an LLM to help them express their opinion on a social issue, or they may borrow an LLM’s description of other people’s ideas about a topic. In the first scenario, AI is helping someone sharpen an existing idea, Xu says. The second scenario is more concerning “because it’s basically asking to generate a common tendency rather than reflecting the specific viewpoint of somebody who already knows what they think.”

If too many people use AI in that way, it could lead to the flattening or dilution of human responses. “What it means for diversity, what it means in terms of expressions of beliefs, ideas, identities – it’s a warning sign about the potential for homogenization,” Xu says.

This has implications beyond academia. If people use AI to fill out workplace surveys about diversity, for example, it could create a false sense of acceptance. “People could draw conclusions like, ‘Oh, discrimination’s not a problem at all, because people only have nice things to say about groups that we have historically thought were under threat of being discriminated against,’ or ‘Everybody just gets along and loves each other.’ ”

The authors note that directly asking survey participants to refrain from using AI can reduce its use. There are also higher-tech ways to discourage LLM use, such as code that blocks copying and pasting text. “One popular form of survey software has this function where you can ask to upload a voice recording instead of written text,” Xu says.

The paper’s results are instructive to survey creators as a call to create concise, clear questions. “Many of the subjects in our study who reported using AI say that they do it when they don’t think that the instructions are clear,” Xu says. “When the participant gets confused or gets frustrated, or it’s just a lot of information to take in, they start to not pay full attention.” Designing studies with humans in mind may be the best way to prevent the boredom or burnout that could tempt someone to fire up ChatGPT. “A lot of the same general principles of good survey design still apply,” Xu says, “and if anything are more important than ever.”


While most drones are powered by either gas or electricity, South Korea has taken a different path, unveiling a drone powered by hydrogen.


Technology


A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.


This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.
