1
165
submitted 11 months ago* (last edited 10 months ago) by ResidualBit@beehaw.org to c/technology@beehaw.org

Regarding Beehaw defederating from lemmy.world and sh.itjust.works, this post goes into detail on the why and the philosophy behind that decision. Additionally, there is an update specific to sh.itjust.works here.

For now, let's talk about what federation is and what defederation means for members of Beehaw or the above two communities interacting with each other, as well as the broader fediverse.

Federation is not something new on the internet. Most users use federated services every day: the URL you use to access an instance is resolved through a federated service known as DNS, and email is another system that functions through federation. Just like those services, you elect to use a service provider that allows you to communicate with the rest of the world. That service provider is your window onto the wider network.

When you federate, you mutually agree to share your content. A post made on one site can be seen on the other, all comments are shared, and users from other sites can even post to yours.

When you defederate, content is no longer shared. Defederation doesn't reverse any previous sharing or posts; it just stops information from flowing to and from the selected instance. It only impacts the instances that are called out.

What this means for you: when one instance (e.g. Beehaw) chooses to defederate from another (e.g. lemmy.world), users on each instance can no longer interact with content on the other, and vice versa. Other instances can still see the content of both servers as though nothing has happened.
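The mechanics above can be sketched as a toy model: each instance keeps a blocklist, and an activity is delivered only when neither side has defederated from the other. This is an illustrative sketch, not actual Lemmy or ActivityPub code, and all names here are made up.

```python
class Instance:
    def __init__(self, domain):
        self.domain = domain
        self.blocked = set()   # domains this instance has defederated from
        self.inbox = []        # activities received from other instances

    def defederate(self, other_domain):
        self.blocked.add(other_domain)

    def deliver(self, activity, target):
        # Delivery fails if either side blocks the other. Activities already
        # in the inbox are untouched: defederation is not retroactive.
        if target.domain in self.blocked or self.domain in target.blocked:
            return False
        target.inbox.append(activity)
        return True

beehaw = Instance("beehaw.org")
world = Instance("lemmy.world")
slrpnk = Instance("slrpnk.net")

world.deliver("post #1", beehaw)   # federated: delivered
beehaw.defederate("lemmy.world")
world.deliver("post #2", beehaw)   # blocked from now on
slrpnk.deliver("post #3", beehaw)  # a third instance is unaffected
print(beehaw.inbox)                # ['post #1', 'post #3']
```

Note that "post #1" stays in the inbox after defederation, matching the point above that cutting federation only stops future flow.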

  • A user is not limited in how many instances they can join (technically at least - some instances have more stringent requirements for joining than others do)
  • A user can interact with Lemmy content without being a user of any Lemmy instance - e.g. Mastodon (UI for doing so is limited, but it is still possible.)

Considering the above, it is important to understand just how much autonomy we, as users, have. For example, as the larger instances are flooded with users and their respective admins and mods try to keep up, many smaller instances not only thrive but emerge regularly (even single-user instances - I have one just for myself!). Defederation does not lock individual users out of anything; there are multiple avenues to maintain access to the entirety of the unfiltered fediverse, if you want it.

On that last point, another consideration at the individual level is - what do you want out of Lemmy? Do you want to find and connect with like-minded people, share information, and connect at a social and community level? Do you want to casually browse content and not really interact with anyone? These questions and the questions that they lead to are critical. There is no direct benefit to being on the biggest instance. In fact, as we all deal with this mass influx, figure out what that means for our own instances and interactions with others, I would argue that a smaller instance is actually much better suited for those who just want to casually browse everything.

Lastly, and tangentially, another concern I have seen related to this conversation is people feeling afraid of being locked out of the content and conversation from the "main" communities forming around big topics across the Lemmiverse (think memes, gaming, tech, politics, news, etc.) Over time, certain communities will certainly become a default for some people given their size (there will always be a biggest or most active - it's just a numbers game.) This, again, all comes down to personal preference and what each individual is looking to get from their Lemmy experience. While there may eventually be a “main” community for a given topic (again, by the numbers), there will also always be quite a few other options for targeted discussion of that topic, within different communities, on different instances, each with its own culture and vibe. This can certainly feel overwhelming and daunting (and at the moment, honestly, it is.) Reddit and other non-federated platforms provided the illusion of choice, but this is what actual choice looks and feels like.

[edit: grammar and spelling]

2
540
submitted 11 months ago* (last edited 11 months ago) by RedPander@lemmy.rogers-net.com to c/technology@beehaw.org

Hopefully I'm posting this in the right place, but I see Reddit developments as Tech news right now.

Wanted to share a website that is tracking Subreddits that have/will be going dark. It even has a sound notification for when they change their status.

Edit: Adding the stream https://www.twitch.tv/reddark_247

Double Edit: Data visualization https://blackout.photon-reddit.com/

3
20
submitted 3 hours ago by funn@lemy.lol to c/technology@beehaw.org

cross-posted from: https://lemy.lol/post/25062075

4
25
submitted 5 hours ago* (last edited 5 hours ago) by hedge@beehaw.org to c/technology@beehaw.org

I was under the impression that Privacy Badger wasn't considered useful any more . . . ? They should've just recommended using Firefox instead, yes?

EDIT: They spoke to, but IMHO, did not give enough time to, Cory Doctorow and Brewster Kahle. They mentioned Mastodon 👍, and described the Fediverse while not actually calling it that! A bit frustrating.

5
32
6
16
submitted 6 hours ago by hedge@beehaw.org to c/technology@beehaw.org

I've never completely understood this, but I think the answer would probably be "no," although I'm not sure. Usually when I leave the house I turn off wifi and just use mobile data (this is a habit from my pre-VPN days), although I guess I should probably just keep it on since using strange Wi-Fi with a VPN is ok (unless someone at Starbucks is using the evil twin router trick . . . ?). I was generally under the impression that mobile data is harder to interfere with than Wi-Fi, but I could well be wrong and my notions out of date. So, if need be, please set me straight. 🙂

7
73

Archive.org link

Some highlights I found interesting:

After Tinucci had cut between 15% and 20% of staffers two weeks earlier, part of much wider layoffs, they believed Musk would affirm plans for a massive charging-network expansion.

Musk, the employees said, was not pleased with Tinucci’s presentation and wanted more layoffs. When she balked, saying deeper cuts would undermine charging-business fundamentals, he responded by firing her and her entire 500-member team.

The departures have upended a network widely viewed as a signature Tesla achievement and a key driver of its EV sales.

Despite the mass firings, Musk has since posted on social media promising to continue expanding the network. But three former charging-team employees told Reuters they have been fielding calls from vendors, contractors and electric utilities, some of which had spent millions of dollars on equipment and infrastructure to help build out Tesla’s network.

Tesla's energy team, which sells solar and battery-storage products for homes and businesses, was tasked with taking over Superchargers and calling some partners to close out ongoing charger-construction projects, said three of the former Tesla employees.

Tinucci was one of few high-ranking female Tesla executives. She recently started reporting directly to Musk, following the departure of battery-and-energy chief Drew Baglino, according to four former Supercharger-team staffers. They said Baglino had historically overseen the charging department without much involvement from Musk.

Two former Supercharger staffers called the $500 million expansion budget a significant reduction from what the team had planned for 2024 - but nonetheless a challenge requiring hundreds of employees.

Three of the former employees called the firings a major setback to U.S. charging expansion because of the relationships Tesla employees had built with suppliers and electric utilities.

8
87
submitted 21 hours ago by alyaza@beehaw.org to c/technology@beehaw.org
9
123
10
153

I hate to go as cliche as "surprising absolutely no one," but really, this is not a surprise.

11
23
submitted 1 day ago by 0x815@feddit.de to c/technology@beehaw.org

The authors introduce and evaluate an open-source software package and methodological framework for detecting and analysing coordinated behaviour on social media, the Coordination Network Toolkit, which uses weighted, directed multigraphs to capture intricate coordination dynamics.
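The core idea behind such toolkits can be illustrated in a few lines: accounts that repeatedly share the same URL within a short time window get linked by a directed edge whose weight counts those co-occurrences. This is a conceptual sketch only, not the Coordination Network Toolkit's actual API; the window size and data are made-up examples.

```python
from collections import defaultdict
from itertools import combinations

WINDOW = 60  # seconds; co-shares within this window count as coordinated

def coordination_graph(posts):
    """posts: list of (account, url, timestamp) tuples.
    Returns {(account_a, account_b): co-share count} for suspiciously
    synchronized pairs, a minimal stand-in for a weighted directed multigraph."""
    by_url = defaultdict(list)
    for account, url, ts in posts:
        by_url[url].append((account, ts))

    weights = defaultdict(int)
    for shares in by_url.values():
        # Compare every pair of shares of the same URL, ordered by time.
        for (a, ta), (b, tb) in combinations(sorted(shares, key=lambda s: s[1]), 2):
            if a != b and abs(ta - tb) <= WINDOW:
                weights[(a, b)] += 1
    return dict(weights)

posts = [
    ("bot1", "http://example.com/story", 0),
    ("bot2", "http://example.com/story", 10),
    ("bot1", "http://example.com/other", 100),
    ("bot2", "http://example.com/other", 130),
    ("organic", "http://example.com/story", 5000),  # too late to count
]
print(coordination_graph(posts))  # {('bot1', 'bot2'): 2}
```

Edge weights accumulate across URLs, so pairs of accounts that synchronize again and again stand out from organic one-off overlaps.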

To whom it may concern.

12
27

This is the alternative Invidious link for the embedded article.

By Mayank Kejriwal, Research Assistant Professor of Industrial & Systems Engineering, University of Southern California.

Many people understand the concept of bias at some intuitive level. In society, and in artificial intelligence systems, racial and gender biases are well documented.

If society could somehow remove bias, would all problems go away? The late Nobel laureate Daniel Kahneman, who was a key figure in the field of behavioral economics, argued in his last book that bias is just one side of the coin. Errors in judgments can be attributed to two sources: bias and noise.

Bias and noise both play important roles in fields such as law, medicine and financial forecasting, where human judgments are central. In our work as computer and information scientists, my colleagues and I have found that noise also plays a role in AI.

Noise in this context means variation in how people make judgments of the same problem or situation. The problem of noise is more pervasive than initially meets the eye. A seminal work dating back to the Great Depression found that different judges gave different sentences for similar cases.

Worryingly, sentencing in court cases can depend on things such as the temperature and whether the local football team won. Such factors, at least in part, contribute to the perception that the justice system is not just biased but also arbitrary at times.

Other examples: Insurance adjusters might give different estimates for similar claims, reflecting noise in their judgments. Noise is likely present in all manner of contests, ranging from wine tastings to local beauty pageants to college admissions.

Noise in the data

On the surface, it doesn’t seem likely that noise could affect the performance of AI systems. After all, machines aren’t affected by weather or football teams, so why would they make judgments that vary with circumstance? On the other hand, researchers know that bias affects AI, because it is reflected in the data that the AI is trained on.

For the new spate of AI models like ChatGPT, the gold standard is human performance on general intelligence problems such as common sense. ChatGPT and its peers are measured against human-labeled commonsense datasets.

Put simply, researchers and developers can ask the machine a commonsense question and compare it with human answers: “If I place a heavy rock on a paper table, will it collapse? Yes or No.” If there is high agreement between the two – in the best case, perfect agreement – the machine is approaching human-level common sense, according to the test.

So where would noise come in? The commonsense question above seems simple, and most humans would likely agree on its answer, but there are many questions where there is more disagreement or uncertainty: “Is the following sentence plausible or implausible? My dog plays volleyball.” In other words, there is potential for noise. It is not surprising that interesting commonsense questions would have some noise.

But the issue is that most AI tests don’t account for this noise in experiments. Intuitively, questions generating human answers that tend to agree with one another should be weighted higher than questions where the answers diverge – in other words, where there is noise. Researchers still don’t know whether or how to weigh AI’s answers in that situation, but a first step is acknowledging that the problem exists.

Tracking down noise in the machine

Theory aside, the question still remains whether all of the above is hypothetical or if in real tests of common sense there is noise. The best way to prove or disprove the presence of noise is to take an existing test, remove the answers and get multiple people to independently label them, meaning provide answers. By measuring disagreement among humans, researchers can know just how much noise is in the test.
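A minimal version of that measurement can be sketched as follows: collect several independent labels per question and compute how often a labeler disagrees with the majority. This is a deliberately simple toy, not the statistics the paper itself uses; the labels are invented for illustration.

```python
from collections import Counter

def noise_rate(labels_per_question):
    """labels_per_question: one list of independent human labels per question.
    Returns the fraction of labels that disagree with each question's majority."""
    disagreements = total = 0
    for labels in labels_per_question:
        majority_size = Counter(labels).most_common(1)[0][1]
        disagreements += len(labels) - majority_size
        total += len(labels)
    return disagreements / total

labels = [
    ["yes", "yes", "yes", "yes", "yes"],  # easy question: full agreement
    ["plausible", "plausible", "implausible", "plausible", "implausible"],
    ["no", "no", "no", "yes", "no"],
]
print(f"{noise_rate(labels):.0%}")  # 20%
```

A rate of zero would mean the test is noise-free; anything above it quantifies how much of a system's "error" might really be disagreement baked into the gold labels.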

The details behind measuring this disagreement are complex, involving significant statistics and math. Besides, who is to say how common sense should be defined? How do you know the human judges are motivated enough to think through the question? These issues lie at the intersection of good experimental design and statistics. Robustness is key: One result, test or set of human labelers is unlikely to convince anyone. As a pragmatic matter, human labor is expensive. Perhaps for this reason, there haven’t been any studies of possible noise in AI tests.

To address this gap, my colleagues and I designed such a study and published our findings in Nature Scientific Reports, showing that even in the domain of common sense, noise is inevitable. Because the setting in which judgments are elicited can matter, we did two kinds of studies. One type of study involved paid workers from Amazon Mechanical Turk, while the other study involved a smaller-scale labeling exercise in two labs at the University of Southern California and the Rensselaer Polytechnic Institute.

You can think of the former as a more realistic online setting, mirroring how many AI tests are actually labeled before being released for training and evaluation. The latter is more of an extreme, guaranteeing high quality but at much smaller scales. The question we set out to answer was how inevitable is noise, and is it just a matter of quality control?

The results were sobering. In both settings, even on commonsense questions that might have been expected to elicit high – even universal – agreement, we found a nontrivial degree of noise. The noise was high enough that we inferred that between 4% and 10% of a system’s performance could be attributed to noise.

To emphasize what this means, suppose I built an AI system that achieved 85% on a test, and you built an AI system that achieved 91%. Your system would seem to be a lot better than mine. But if there is noise in the human labels that were used to score the answers, then we’re not sure anymore that the 6% improvement means much. For all we know, there may be no real improvement.
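A quick simulation makes the point concrete: if each gold label has some chance of being wrong, the measured score of a fixed system fluctuates from test to test, and a 6-point gap can fall inside that fluctuation. The accuracy, noise rate, and test size below are assumptions for illustration, not the paper's estimates.

```python
import random

def measured_score(true_accuracy, label_noise, n_questions, rng):
    """Score a system against gold labels, each of which may be flipped by noise."""
    correct = 0
    for _ in range(n_questions):
        system_right = rng.random() < true_accuracy
        label_flipped = rng.random() < label_noise
        # A flipped gold label makes a right answer look wrong, and vice versa.
        correct += system_right != label_flipped
    return correct / n_questions

rng = random.Random(0)
scores = [measured_score(0.88, 0.07, 200, rng) for _ in range(1000)]
print(f"min={min(scores):.2f} max={max(scores):.2f}")
```

With 7% label noise on a 200-question test, a system whose true accuracy is 88% scores across a band wider than the 6-point gap in the example, so the apparent winner can flip purely on noise.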

On AI leaderboards, where large language models like the one that powers ChatGPT are compared, performance differences between rival systems are far narrower, typically less than 1%. As we show in the paper, ordinary statistics do not really come to the rescue for disentangling the effects of noise from those of true performance improvements.

Noise audits

What is the way forward? Returning to Kahneman’s book, he proposed the concept of a “noise audit” for quantifying and ultimately mitigating noise as much as possible. At the very least, AI researchers need to estimate what influence noise might be having.

Auditing AI systems for bias is somewhat commonplace, so we believe that the concept of a noise audit should naturally follow. We hope that this study, as well as others like it, leads to their adoption.

13
72
submitted 2 days ago* (last edited 2 days ago) by Powderhorn@beehaw.org to c/technology@beehaw.org
14
23
submitted 2 days ago by Five@slrpnk.net to c/technology@beehaw.org
15
38
16
25
17
112
18
147

Archived link.

On Jan. 6, 2021, QAnon conspiracy theorists played a significant role in inciting Donald Trump supporters to storm the Capitol building in D.C., hoping to overturn the 2020 election in favor of Trump.

Days later, Twitter suspended tens of thousands of QAnon accounts, effectively banning most users who promote the far-right conspiracy theory.

Now, a new study from Newsguard has uncovered that since Elon Musk acquired the company, QAnon has had a resurgence on X, formerly Twitter, over the past year.

QAnon grows on X

Tracking commonly used QAnon phrases like "QSentMe," "TheGreatAwakening," and "WWG1WGA" (which stands for "Where We Go One, We Go All"), Newsguard found that these QAnon-related slogans and hashtags have increased a whopping 1,283 percent on X under Musk.

From May 1, 2023 to May 1, 2024, there were 1.12 million mentions of these QAnon supporter phrases on X. This was a huge uptick from the 81,100 mentions just one year earlier from May 1, 2022 to May 1, 2023.

One of the most viral QAnon-related posts of the year, on the "Frazzledrip" conspiracy, has received more than 21.8 million views, according to the report. Most concerning, however, is that it was posted by a right-wing influencer who has specifically received support from Musk.

The Jan. 2024 tweet was posted by @dom_lucre, a user with more than 1.2 million followers who commonly posts far-right conspiracy theories. In July 2023, @dom_lucre was suspended on then-Twitter. Responding to @dom_lucre's supporters, Musk shared at the time that @dom_lucre was "suspended for posting child exploitation pictures."

Sharing child sexual abuse material or CSAM would result in a permanent ban on most platforms. However, Musk decided to personally intervene in favor of @dom_lucre and reinstated his account.

Since then, @dom_lucre has posted about how he earns thousands of dollars directly from X. The company allows him to monetize his conspiratorial posts via the platform's official creator monetization program.

Musk has also previously voiced his support for Jacob Chansley, a QAnon follower known as the "QAnon Shaman," who served prison time for his role in the Jan. 6 riot at the Capitol.

The dangers of QAnon

QAnon's adherents follow a number of far-right conspiracy theories, but broadly (and falsely) believe that former President Trump has been secretly battling against a global cabal of Satanic baby-eating traffickers, who just happen to primarily be made up of Democratic Party politicians and Hollywood elites.

Unfortunately, these beliefs have too often turned deadly. Numerous QAnon followers have been involved in killings fueled by their beliefs. In 2022, one Michigan man killed his wife before being fatally shot in a standoff with police. His daughter said her father spiraled out of control as he fell into the QAnon conspiracies. In 2021, another QAnon conspiracy theorist killed his two young children, claiming that his wife had "Serpent DNA" and his children were monsters.

Of course, QAnon never completely disappeared from social media platforms. Its followers still espoused their beliefs albeit in a more coded manner over the past few years to circumvent social media platforms' policies. Now, though, QAnon believers are once again being more open about their radical theories.

The looming November 2024 Presidential election likely plays a role in the sudden resurgence of QAnon on X, as QAnon-believing Trump supporters look to help their chosen candidate. However, Musk and X have actively welcomed these users to their social media service, eagerly providing them with a platform to spread their dangerous falsehoods.

19
32

A cyberattack on the Ascension health system operating in 19 states across the U.S. forced some of its 140 hospitals to divert ambulances, caused patients to postpone medical tests and blocked online access to patient records

An Ascension spokesperson said it detected “unusual activity” Wednesday on its computer network systems. Officials refused to say whether the non-profit Catholic health system, based in St. Louis, was the victim of a ransomware attack or whether it had paid a ransom, and it did not immediately respond to an email seeking updates.

But the attack had the hallmarks of a ransomware attack, and Ascension said it had called in Mandiant, the Google cybersecurity unit that is a leading responder to such attacks. Earlier this year, a cyberattack on Change Healthcare disrupted care systems nationwide, and the CEO of its parent, UnitedHealth Group Inc., acknowledged in testimony to Congress that it had paid a ransom of $22 million in bitcoin.

Ascension said that both its electronic records system and the MyChart system that gives patients access to their records and allows them to communicate with their doctors were offline.

“We have determined this is a cybersecurity incident,” the national Ascension spokesperson’s statement said. “Our investigation and restoration work will take time to complete, and we do not have a timeline for completion.”

To prevent the automated spread of ransomware, hospital IT officials typically take electronic medical records and appointment-scheduling systems offline. UnitedHealth CEO Andrew Witty told congressional committees that Change Healthcare immediately disconnected from other systems to prevent the attack from spreading during its incident.

The Ascension spokesperson's latest statement, issued Thursday, said ambulances had been diverted from “several” hospitals without naming them.

In Wichita, Kansas, local news reports said the local emergency medical services started diverting all ambulance calls from its hospitals there Wednesday, though the health system's spokesperson there said Friday that the full diversion of ambulances ended Thursday afternoon.

The EMS service for Pensacola, Florida, also diverted patients from the Ascension hospital there to other hospitals, its spokesperson told the Pensacola News Journal.

And WTMJ-TV in Milwaukee reported that Ascension patients in the area said they were missing CT scans and mammograms and couldn't refill prescriptions.

Connie Smith, president of the Wisconsin Federation of Nurses and Health Professionals, is among the Ascension providers turning to paper records this week to cope. Smith, who coordinates surgeries at Ascension St. Francis Hospital in Milwaukee, said the hospital didn’t cancel any surgical procedures and continued treating emergency patients.

But she said everything has slowed down because electronic systems are built into the hospital’s daily operations. Younger providers are often unfamiliar with paper copies of essential records and it takes more time to document patient care, check the results of prior lab tests and verify information with doctors’ offices, she said.

Smith said union leaders feel staff and service cutbacks have made the situation even tougher. Hospital staff also have received little information about what led to the attack or when operations might get closer to normal, she said.

“You’re doing everything to the best of your ability but you leave feeling frustrated because you know you could have done things faster or gotten that patient home sooner if you just had some extra hands,” Smith said.

Ascension said its system expected to use “downtime” procedures “for some time” and advised patients to bring notes on their symptoms and a list of prescription numbers or prescription bottles with them to appointments.

Cybersecurity experts say ransomware attacks have increased substantially in recent years, especially in the health care sector. Increasingly, ransomware gangs steal data before activating data-scrambling malware that paralyzes networks. The threat of making stolen data public is used to extort payments. That data can also be sold online.

“We are working around the clock with internal and external advisors to investigate, contain, and restore our systems,” the Ascension spokesperson's latest statement said.

The attack against Change Healthcare earlier this year delayed insurance reimbursements and heaped stress on doctor’s offices around the country. Change Healthcare provides technology used by doctor offices and other care providers to submit and process billions of insurance claims a year.

It was unclear Friday whether the same group was responsible for both attacks.

Witty said Change Healthcare's core systems were now fully functional. But company officials have said it may take several months of analysis to identify and notify those who were affected by the attack.

They also have said they see no signs that doctor charts or full medical histories were released after the attack. Witty told senators that UnitedHealth repels an attempted intrusion every 70 seconds.

A ransomware attack in November prompted the Ardent Health Services system, operating 30 hospitals in six states, to divert patients from some of its emergency rooms to other hospitals while postponing certain elective procedures.

20
160
21
78
submitted 5 days ago by 0x815@feddit.de to c/technology@beehaw.org

Archived version

Here is the report (pdf)

Security researchers at Insikt Group identified a malign influence network, CopyCop, skillfully leveraging inauthentic media outlets in the US, UK, and France. This network is suspected to be operated from Russia and is likely aligned with the Russian government. CopyCop extensively used generative AI to plagiarize and modify content from legitimate media sources to tailor political messages with specific biases. This included content critical of Western policies and supportive of Russian perspectives on international issues like the Ukraine conflict and the Israel-Hamas tensions.

CopyCop’s operation involves a calculated use of large language models (LLMs) to plagiarize, translate, and edit content from legitimate mainstream media outlets. By employing prompt engineering techniques, the network tailors this content to resonate with specific audiences, injecting political bias that aligns with its strategic objectives. In recent weeks, alongside its AI-generated content, CopyCop has begun to gain traction by posting targeted, human-produced content that engages deeply with its audience.

The content disseminated by CopyCop spans divisive domestic issues, including perspectives on Russia’s military actions in Ukraine presented in a pro-Russian light and critical viewpoints of Israeli military operations in Gaza. It also includes narratives that influence the political landscape in the US, notably by supporting Republican candidates while disparaging House and Senate Democrats, as well as critiquing the Biden administration’s policies.

The infrastructure supporting CopyCop has strong ties to the disinformation outlet DCWeekly, managed by John Mark Dougan, a US citizen who fled to Russia in 2016. The content from CopyCop is also amplified by well-known Russian state-sponsored actors such as Doppelgänger and Portal Kombat. Also, it boosts material from other Russian influence operations like the Foundation to Battle Injustice and InfoRos, suggesting a highly coordinated effort.

This use of generative AI to create and disseminate content at scale introduces significant challenges for those tasked with safeguarding elections. The sophisticated narratives, tailored to stir specific political sentiments, make it increasingly difficult for public officials to counteract the rapid spread of these false narratives effectively.

Public-sector organizations are urged to heighten awareness around threat actors like CopyCop and the risks posed by AI-generated disinformation. Legitimate media outlets also face risks, as their content may be plagiarized and weaponized to support adversarial state narratives, potentially damaging their credibility.

22
116
23
28
Emoji history: the missing years (blog.gingerbeardman.com)
24
107
25
48

Technology

37200 readers

Rumors, happenings, and innovations in the technology sphere. If it's technological news or discussion of technology, it probably belongs here.


This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 2 years ago