1

Copious access points are deployed by naïve admins who are oblivious to the fact that not everyone runs the latest gear. The shitty practice of pushing wi-fi in an arbitrarily exclusive way needs pushback. The first step is exposure. We need to enumerate the ways various demographics of people are being excluded and build a DB of it.

The wi-fi protocol is the first point of failure, e.g. 802.11b vs. 802.11a/g/n. All new hardware is backwards compatible with older protocols, so when an 802.11b device cannot see a signal, it’s because some asshat proactively disabled 802.11b.

Most exclusivity occurs with shitty captive portals. There are countless ways to fuck up a website to make it exclusive. E.g.

  • to impose SSL, which inherently imposes recent certs and CAs that exclude old devices. It’s essentially rock stupid when the captive portal is nothing more than a button that says “I accept the ToS”.
  • to impose JavaScript, which encapsulates a whole industry of poorly trained people who have no concept of stability of standards and interoperability.
  • to impose SMS confirmation, which makes the ignorant assumption that every single user has a mobile phone, that they carry it with them, and that they are willing to share their number willy nilly.

🌱environmental impact🚮

The brain-dead practice of deploying public Internet access using needlessly exclusive tech is a form of forced obsolescence. It’s one of the factors that pushes people to throw away working devices in order to overcome these ecocidal Internet access deployments.

🔧the fix💾

An app that records SSIDs, their location, and all the detectable exclusivity characteristics. It should also take human input with notes to record exclusivity that is not auto-detectable. Ideally the local DB would sync with a central DB. It should also be possible to extract a GPX file for a given region which could then be imported into OSMand or Organic Maps.
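A minimal sketch of the GPX-export step, assuming records have already been collected locally; the field names (`ssid`, `lat`, `lon`, `notes`) are hypothetical, and only a bare-bones GPX 1.1 waypoint file is emitted:

```python
# Minimal GPX export for recorded access points (field names are hypothetical).
from xml.sax.saxutils import escape

def to_gpx(access_points):
    """access_points: iterable of dicts with 'ssid', 'lat', 'lon', 'notes'."""
    wpts = []
    for ap in access_points:
        wpts.append(
            '  <wpt lat="{lat}" lon="{lon}">\n'
            '    <name>{name}</name>\n'
            '    <desc>{desc}</desc>\n'
            '  </wpt>'.format(
                lat=ap["lat"], lon=ap["lon"],
                name=escape(ap["ssid"]), desc=escape(ap.get("notes", ""))))
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<gpx version="1.1" creator="wifi-exclusion-logger" '
            'xmlns="http://www.topografix.com/GPX/1/1">\n'
            + "\n".join(wpts) + "\n</gpx>\n")

gpx = to_gpx([{"ssid": "CafeFree", "lat": 48.8566, "lon": 2.3522,
               "notes": "802.11b disabled; SMS-gated captive portal"}])
```

A file like this imports into OsmAnd or Organic Maps as ordinary waypoints, with the exclusivity notes in the description field.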

2

We are drowning in enshittified websites. Cloudflare automatically enshittifies roughly ⅓ of the world’s websites. On top of that, there are countless shitty anti-human features that plague the web. Some of them just annoy, and some actually make the website unreachable or unusable for various demographics of people (such as Tor users).

Most infuriating is when a GOVERNMENT website intended to serve the public uses access restrictions (like Cloudflare) or does something else to exclude demographics of people who are entitled access. The Tor community can no longer access most websites of the EU.

What we need

We need an app that will:

  • attempt to visit a webpage from multiple different networks (VPN, Tor, residential clearnet, and a variety of different geographic regions).
  • try a variety of different user agent strings (cURL, wget, firefox, lynx).
  • compare the content between non-erroneous payloads. A significant difference should raise flags. If there is much less content, it could perhaps be regarded as an access denial without an error (e.g. a page that simply says: “we don’t serve .. (your kind of people)”). Some common phrases could be searched for.
  • detect exclusive walled gardens like Cloudflare and Sucuri
  • accessibility¹/enshittification check: whether the contact page imposes a GUI or a Google CAPTCHA (¹in terms of people with impairments)
  • open data check: whether the contact page discloses a street address or phone number.
  • check whether the page functions with uMatrix (maybe this is not possible).
  • check whether a privacy policy exists.
  • check whether there is a popup blocker blocker (that blocks those who block popups/ads).
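The comparison and fingerprint checks could be sketched roughly as below. The fetching over different networks is omitted; the marker strings and the 3× size threshold are guesses for illustration, not a vetted heuristic:

```python
# Sketch of the comparison step: given page bodies fetched over different
# networks/user agents (fetching itself omitted), flag large divergence and
# known walled-garden fingerprints. Thresholds and markers are guesses.
WALLED_GARDEN_MARKERS = ["cloudflare", "cf-ray", "sucuri"]
DENIAL_PHRASES = ["access denied", "not available in your country"]

def classify(bodies):
    """bodies: dict mapping probe name -> page text (non-error responses only)."""
    flags = []
    sizes = {k: len(v) for k, v in bodies.items()}
    if sizes and max(sizes.values()) > 3 * max(1, min(sizes.values())):
        flags.append("significant content divergence between probes")
    for probe, body in bodies.items():
        low = body.lower()
        if any(m in low for m in WALLED_GARDEN_MARKERS):
            flags.append(f"{probe}: walled-garden fingerprint")
        if any(p in low for p in DENIAL_PHRASES):
            flags.append(f"{probe}: denial phrase without HTTP error")
    return flags

flags = classify({
    "residential": "<html>Full article text with lots of content here</html>" * 20,
    "tor": "Access denied. Checking your browser - Cloudflare",
})
```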

In the end, the app produces a checklist and concludes with a final result:

  • ✔👍🎉 ❝The website under test is publicly accessible❞
  • 🤷🫀 ❝The website under test is publicly accessible, but dark patterns or similarly unsuitable/inappropriate anti-user mechanisms were detected. The website should be avoided.❞
  • ❌ ❝The website under test is access-restricted or not entirely publicly accessible❞

The report could perhaps be timestamped, digitally signed by the entity running the app, and centrally recorded. Then concerned people among the public could use the report as an independent/authoritative source for claiming that a “public” resource is not actually public.
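A sketch of what such a report might look like: JSON with a UTC timestamp and a SHA-256 digest over the canonicalised payload, which a detached signature (e.g. GPG, not shown here) would then cover. All field names are hypothetical:

```python
# Hypothetical report format: timestamped JSON plus a digest that the
# report-issuing entity would sign (signing step omitted).
import json, hashlib, datetime

def build_report(url, verdict, checklist):
    report = {
        "url": url,
        "verdict": verdict,   # "public" | "public-but-hostile" | "restricted"
        "checklist": checklist,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    payload = json.dumps(report, sort_keys=True).encode()
    report["sha256"] = hashlib.sha256(payload).hexdigest()
    return report

r = build_report("https://example.gov", "restricted",
                 {"cloudflare": True, "privacy_policy": False})
```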

3

Some ATMs demand your PIN as the very first step. More privacy-respecting machines let you enter your order details first, with PIN entry as the last step. So if the order cannot be filled (e.g. not enough banknotes, not the preferred denominations, no balance-inquiry service, printer out of paper, etc.), you can get your card back without having the card read.

When the PIN is demanded first, we can only wonder what information is being collected and what is being needlessly sent over the network. Some machines apparently check your balance/credit line silently and automatically (if there’s a legit purpose, perhaps it’s to avoid offering a preselected withdrawal amount that is over a limit).

Some ATMs will reject a card instantly, before entering anything. Then they report a bogus error (“card fault”), which would seem to violate the GDPR.

So this is why we need a FOSS app for smartcard readers: something that will show you what information is available on your bank card without entering your PIN, and then what additional information is available using your PIN. Ultimately, it would be useful to know whether to reject ATMs that demand a PIN as the first step.
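As a taste of what such an app would do first, here is the EMV SELECT command for the contactless payment directory (“2PAY.SYS.DDF01”), which a card answers without any PIN. Actually transmitting it to a reader would need a PC/SC library such as pyscard, which is omitted here:

```python
# Building the EMV SELECT APDU for the contactless payment system directory
# ("2PAY.SYS.DDF01"); reading it requires no PIN. Sending it to a real reader
# would need a PC/SC library such as pyscard (not shown).
def select_apdu(df_name: bytes) -> bytes:
    # CLA=00 INS=A4 (SELECT) P1=04 (select by name) P2=00, Lc, data, Le=00
    return bytes([0x00, 0xA4, 0x04, 0x00, len(df_name)]) + df_name + b"\x00"

PPSE = select_apdu(b"2PAY.SYS.DDF01")
```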

4

YouTube has become hostile toward the Invidious community and also toward direct users over Tor. At the same time, YT is no longer purely an entertainment platform of cat-video frills. Governments and public schools are putting public content on YT that everyone should be entitled to access.

Google’s attack on Invidious involves targeting public servers. But Google cannot block everyone. So we need a killer FOSS p2p app that combines torrenting with conventional fetching. It has to be good enough that quasi-normies are willing to run it.

The app should find, leech, and seed a torrent based on the video ID. At the same time it should do a parallel fetch using yt-dlp. If blocked by Google, the user can still get fed by the torrent. It only takes one unblocked user to seed the torrent.

Ideally the trackers are onion based. Perhaps to go easy on the network, the onion torrenting should be limited to low-res open format variants.
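The parallel-fetch idea can be sketched as a race between two fetchers, with stubs standing in for the torrent client and the yt-dlp subprocess; whichever source delivers first wins and the other is cancelled:

```python
# Sketch of the "parallel fetch" logic: race a torrent download against a
# direct yt-dlp fetch and take whichever finishes first. Both fetchers here
# are stubs; real ones would wrap a torrent client and a yt-dlp subprocess.
import asyncio

async def fetch_via_torrent(video_id):
    await asyncio.sleep(0.05)            # stand-in for swarm download
    return ("torrent", video_id)

async def fetch_via_ytdlp(video_id):
    await asyncio.sleep(0.01)            # stand-in for direct fetch
    return ("direct", video_id)

async def get_video(video_id):
    tasks = [asyncio.ensure_future(fetch_via_torrent(video_id)),
             asyncio.ensure_future(fetch_via_ytdlp(video_id))]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for t in pending:
        t.cancel()                       # the loser is abandoned
    return done.pop().result()

source, vid = asyncio.run(get_video("exampleID01"))
```

If Google blocks the direct fetch, the torrent stub simply becomes the first (and only) task to complete, which is exactly the fallback behaviour described above.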

5
submitted 3 weeks ago* (last edited 3 weeks ago) by evenwicht@lemmy.sdf.org to c/foss_requests@libretechni.ca

Many color printers surreptitiously print a steganographic watermark of yellow dots to make all documents traceable to their source. Some threads covering this:

proposal:

In principle, a color printer could be made to print a blank page so that the page contains only the MIC watermark. It could then be scanned at 600+ DPI. A FOSS app could analyze the dot spacing, work out where to add dots to complete the uniform grid, and produce a mask that can be easily layered onto all documents using ImageMagick.
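The grid-completion step might look like this, assuming dot centres have already been extracted from the scan; the uniform-pitch assumption is a simplification of real MIC patterns:

```python
# Sketch of the grid-completion step: given (x, y) centres of detected yellow
# dots on an otherwise blank scan, infer the grid pitch and list the grid
# positions where dots must be ADDED to make the pattern uniform (the mask).
def grid_fill(dots):
    xs = sorted({x for x, _ in dots})
    ys = sorted({y for _, y in dots})
    # pitch = smallest gap between neighbouring coordinates
    pitch_x = min(b - a for a, b in zip(xs, xs[1:]))
    pitch_y = min(b - a for a, b in zip(ys, ys[1:]))
    missing = []
    y = ys[0]
    while y <= ys[-1]:
        x = xs[0]
        while x <= xs[-1]:
            if (x, y) not in dots:
                missing.append((x, y))
            x += pitch_x
        y += pitch_y
    return missing

# dots at a 10-unit pitch with two grid positions unoccupied
detected = {(0, 0), (10, 0), (20, 0), (0, 10), (20, 10), (0, 20), (20, 20)}
mask = grid_fill(detected)
```

The returned coordinates are exactly where extra dots would be stamped to flood the grid, i.e. the overlay mask for ImageMagick.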

6

Something like the “unwaffle” tool on https://goblin.tools/Formalizer would be useful to have so that large amounts of text can be condensed without relying on this cloud service.

For example, it would be useful for something like the Lemmyverse search tool, which I expand on here.

7

MythTV is a great tool for browsing broadcast TV schedules and scheduling recordings. It’s a shame so many people have been suckered into cloud streaming services, which have a subscription cost and yet collect data on you regardless. Broadcast TV lately has almost no commercial interruptions and of course no tracking. It’s gratis as well. If they bring in commercials, MythTV can auto-detect and remove them.

FM and DAB radio signals include EPG. So the scheduling metadata is out there. But apparently no consumer receivers make use of it. They just show album art.

There are no jazz stations where I live. Only a few stations which sometimes play jazz. It’s a shame the EPG is not being exploited. Broadcast radio would be so much better if we could browse a MythTV schedule and select programs to record.

I suppose it’s not just a software problem. There are FM tuner USB sticks (not great). Nothing for DAB. And nothing comparable to the SiliconDust designs, which are tuners that connect to ethernet.

8

I have ongoing business with: banks, telecoms, energy suppliers, cloud services, etc.

They all have dynamic terms of service (ToS) and privacy policies. They may or may not notify me when they change them. If they bother to notify me, the message always reads like this: “we are making changes to benefit you…” Yeah, bullshit. These notices never give the useful details; they hide them. Corporations don’t want you to be aware of how they are going to fuck you over more in the future.

The fix seems simple: a tool that once per month fetches the terms of service and privacy policies of all the suppliers we have a relationship with. The tool could extract the text and check it into a local git repo. Another tool could diff the versions and feed the diff into an AI program that tells you in plain English what changed. It could even add a bit of character and say “Next month we’re going to fuck you more by increasing penalties for late payments and shortening the grace period”.
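The diff step is the easy part. A sketch using two hand-written snapshot strings in place of the git-archived versions (the fetching and git plumbing are omitted); the resulting unified diff is what would be fed to the summariser:

```python
# Sketch of the diff step between two archived policy snapshots.
# The snapshot texts and filenames here are invented for illustration.
import difflib

old = """Late payments incur a fee after a 30-day grace period.
We only share your personal data when legally permitted."""
new = """Late payments incur a fee after a 10-day grace period.
We only share your personal data when legally permitted."""

diff = list(difflib.unified_diff(old.splitlines(), new.splitlines(),
                                 fromfile="tos-2024-01", tofile="tos-2024-02",
                                 lineterm=""))
```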

It would also be useful if the AI could take the whole privacy policy as input and produce a Cliff’s Notes extraction of what’s important. It could take care to detect weasel wording and give the honest meaning (like when the policy says “we only share your personal data when legally permitted”, which really means “we pawn your ass to the full extent legally possible”).

Another nice-to-have feature: you feed it the privacy policies of 10 different banks, and it compares them and produces a detailed report that ranks them by the extent of their privacy abuses.

9

cross-posted from: https://linkage.ds8.zone/post/515550

Folks: most Lemmy client apps run on phones. I am looking for a FOSS phone app that works offline. That is, the phone has no data plan and only occasionally connects to public wi-fi hotspots. I do not want to be entering login passwords or reading and writing posts while connected; reading and writing posts interactively needs to happen offline. Typical workflow: when I meet people at a bar/cafe, I need the app to sync over the public wi-fi without using my attention. It should post my comments and fetch the threads in which I am active, for offline access later. It needs to support multiple accounts spanning multiple instances.
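For what it’s worth, the queue-and-flush half of that workflow is simple to sketch; the actual Lemmy API call is stubbed out here, and the schema is invented for illustration:

```python
# Sketch of an offline outbox: comments written offline are queued in SQLite
# and flushed when a hotspot is reachable. The Lemmy API call is a stub.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE outbox (
    id INTEGER PRIMARY KEY, account TEXT, post_id TEXT,
    body TEXT, sent INTEGER DEFAULT 0)""")

def queue_comment(account, post_id, body):
    db.execute("INSERT INTO outbox (account, post_id, body) VALUES (?, ?, ?)",
               (account, post_id, body))

def flush(send):              # send: callable that performs the API call
    rows = db.execute(
        "SELECT id, account, post_id, body FROM outbox WHERE sent = 0").fetchall()
    for rowid, account, post_id, body in rows:
        if send(account, post_id, body):
            db.execute("UPDATE outbox SET sent = 1 WHERE id = ?", (rowid,))

queue_comment("alice@example.instance", "12345", "written offline at the cafe")
flush(lambda *a: True)        # pretend the hotspot sync succeeded
sent = db.execute("SELECT COUNT(*) FROM outbox WHERE sent = 1").fetchone()[0]
```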

Does anything like this exist? Or do all Lemmy/kbin/mbin phone apps demand your realtime attention when connected?

10

Suppose you are about to travel to some unfamiliar city, perhaps abroad. You don’t want to just show up uninformed, or you might overlook some great restaurant or bar. In principle, it would be useful to have an app that visits the websites of all (or selected) restaurants in your destination city before you go. It could harvest all the PDF menus.

The train or whatever mode of transport may not have wi-fi, but having an offline collection of PDFs can be a good way to get informed offline and decide where to go.

If such an app existed, restaurant owners would be encouraged to post PDF versions of their menus on the web.

The list of websites could be grabbed from OSM. Restaurants likely have to be licensed in some way by the government or a hygiene regulator, which could also be a source for website URLs (not sure).
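Grabbing the list from OSM could work via the Overpass API. A sketch with an illustrative bounding box; the HTTP request to an Overpass endpoint is omitted to keep this self-contained, and only the tag-extraction step runs:

```python
# Overpass QL query pulling restaurant website tags for a bounding box
# (roughly central Lyon; the coordinates are illustrative). Sending it to
# an Overpass endpoint is left out.
OVERPASS_QUERY = """
[out:json][timeout:25];
node["amenity"="restaurant"]["website"](45.74,4.82,45.78,4.87);
out tags;
"""

def extract_websites(overpass_json):
    return [el["tags"]["website"]
            for el in overpass_json.get("elements", [])
            if "website" in el.get("tags", {})]

# sample shaped like an Overpass JSON response
urls = extract_websites({"elements": [
    {"tags": {"amenity": "restaurant", "website": "https://example-bistro.fr"}},
    {"tags": {"amenity": "restaurant"}},
]})
```

Each harvested URL would then be crawled for links ending in `.pdf`.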

11

There are probably thousands of LaTeX packages, many of which are riddled with bugs and limitations. All these packages have an inherent need to interoperate and be used together, unlike almost any other software. Yet there are countless bizarre incompatibilities. There are situations where two different font packages cannot be used in the same document because of avoidable name clashes. If multiple packages load a color package with different options, errors are triggered about clashing options when all the user did was use two unrelated packages.

Every user must do a dance with all these unknown bugs. Becoming proficient with LaTeX entails an exercise of working around bugs. Often the order of \usepackage lines makes the difference between compilation and failure, and the user must guess which packages to reorder.

So there is a strong need for a robust comprehensive bug tracking system. Many of the packages have no bug tracker whatsoever. Many of those may even be unmaintained code. Every package developer uses the bug tracker of their choice (if they bother), which is often Microsoft Github’s walled garden of exclusion.

Debian has a disaster of its own w.r.t. LaTeX

Debian bundles up the whole massive monolithic collection of LaTeX packages into a few texlive-* packages. If you find a bug in a package like csquotes, which maps to texlive-latex-extra, and report it in the Debian bug tracker for that package, the Debian maintainer is driven up the wall, because one person ends up with hundreds or thousands of packages to manage.

It’s an interesting disaster because the Debian project has the very good principle that all bugs be reportable and transparent. Testers are guided to report bugs in the Debian bug tracker, not upstream. It’s the Debian pkg manager’s job to forward bugs upstream as needed. Rightly so, but there is also a reasonable live-and-let-live culture that tolerates volunteer maintainers using their own management style. So some will instruct users to directly file bugs upstream.

Apart from LaTeX, it’s a bit shitty because users should not be exposed to MS’s walled garden, which amounts to bug suppression. But I can also appreciate the LaTeX maintainer’s problem: it would be virtually insurmountable for a Debian maintainer to take on such a workload.

What’s needed

  • Each developer of course needs control of their choice of git and bug tracker, however discriminatory the choice is -- even if they choose to have no bug tracker at all.
  • Every user and tester needs a non-discriminatory non-controversial resource to report bugs on any and all LaTeX packages. They should not be forced to lick Microsoft’s boots (if MS even allows them).
  • Multiple trackers need a single point of review, so everyone can read bug reports in a single place.

Nothing exists that can do that. We need a quasi-federation of bug trackers giving multiple places to write bug reports and a centralised resource for reviewing bug reports. Even if a package is abandoned by a maintainer, it’s still useful for users to report bugs and discuss workarounds (in fact, more importantly so).
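The “single place to read” half could start as a normalisation layer over whatever trackers exist. The per-tracker field names below are illustrative, not real API schemas:

```python
# Sketch of the "single point of review": normalise issues pulled from
# heterogeneous trackers into one merged, chronologically sorted stream.
# The raw field names per tracker are illustrative.
def normalise(source, raw):
    if source == "github":
        return {"pkg": raw["repo"], "title": raw["title"], "date": raw["created_at"]}
    if source == "debbugs":
        return {"pkg": raw["package"], "title": raw["subject"], "date": raw["date"]}
    raise ValueError(f"unknown tracker: {source}")

def merged_feed(batches):
    issues = [normalise(src, raw) for src, raws in batches for raw in raws]
    return sorted(issues, key=lambda i: i["date"])

feed = merged_feed([
    ("github", [{"repo": "csquotes", "title": "clash with babel",
                 "created_at": "2024-03-01"}]),
    ("debbugs", [{"package": "texlive-latex-extra",
                  "subject": "csquotes option clash", "date": "2024-02-15"}]),
])
```

The write side (accepting reports and forwarding them to whichever tracker a maintainer chose) is the harder, federation-shaped part of the problem.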

The LaTeX community needs to solve this problem. And when they do, it could solve problems for all FOSS not just LaTeX.

(why this is posted to !foss_requests@libretechni.ca: even though a whole infrastructure is needed, existing FOSS does not seem to satisfy it. Gitea is insufficient.)

12
submitted 3 months ago* (last edited 3 months ago) by nonserf@libretechni.ca to c/foss_requests@libretechni.ca

The websites of trains, planes, buses, and ride shares have become bot-hostile and also Tor-hostile. This forces us into a manual, labor-intensive effort of pointing and clicking through shitty proprietary GUIs. We cannot simply query for the cheapest trip over a span of time with parameters of our choice. We typically must also search one day per query.

Suppose I want to go to Paris, Lyon, Lille, or Marseilles, and I can leave any morning in the next 2 weeks. Finding the cheapest ticket requires 56 manual web queries (4 destinations × 14 days). And that’s for just one carrier. If I want to query both Flixbus and BlaBlaCar, we’re talking 112 queries. Then I have to keep notes - a shortlist of prospective tickets. Fuck me. Why do people tolerate this? (They probably just search less and take a suboptimal deal).

If we write web-scraping software, the websites bogart their inventory with anti-bot protectionist mechanisms that would blacklist your IP address; thereafter, we would not even be able to do manual searches. So of course a bot would have to run over Tor or a VPN. But those IPs are generally blocked outright anyway.

The solution: MitM software

We need some browser-independent middleware that collects the data and shares it. Ideally it would work like a special purpose socat command. It would have to do the TLS handshake with the travel site and offer a local unencrypted port for the GUI browser to connect to. That would be a generic tool comparable to Wireshark (or perhaps #Wireshark can even serve this purpose?) Then a site-specific program could monitor the traffic, parse it, and populate a local SQLite DB. Another tool could sync the local DB with a centralised cloud DB. A fourth tool could provide a UI to the DB that gives us the queries we need.
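The socat-like core is essentially a byte pump with a tap for the site-specific parser. A sketch of just the pump is below; the TLS side (wrapping the upstream socket with `ssl.create_default_context().wrap_socket(sock, server_hostname=host)`) and the local listener are omitted:

```python
# Sketch of the socat-like core: copy bytes between the plaintext local side
# and the (TLS-wrapped) upstream socket, letting a 'tap' callback observe
# every chunk so a site-specific parser can populate the SQLite DB.
import socket

def pump(src, dst, tap=None, bufsize=4096):
    """Copy bytes src -> dst until EOF; 'tap' sees every chunk (for parsing)."""
    while True:
        chunk = src.recv(bufsize)
        if not chunk:                      # EOF on the source side
            dst.shutdown(socket.SHUT_WR)   # propagate the close downstream
            return
        if tap:
            tap(chunk)
        dst.sendall(chunk)
```

In the real tool, one pump per direction runs in its own thread, and the tap for the server-to-client direction feeds the fare parser.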

A browser extension that monitors and shares would be an alternative solution -- but not as good. It would impose a particular browser. And it would be impossible to make the connection to the central DB over Tor while making the browser connection over a different network.

Fares often change daily, so the DB would of course timestamp fares. Perhaps an AI mechanism could approximate the price based on past pricing trends for a particular route; a Flixbus fare will start at 10 but climb to 40 on the day of travel. Stale price quotes would obviously be inexact, but when the DB shows an interesting price and you search it manually, the DB would be updated. The route and schedule info would of course be quite useful regardless (and unlikely to be stale).
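A sketch of the timestamped fare table and a “cheapest current quote” query; the schema and values are invented for illustration:

```python
# Sketch of the local fare DB: timestamped observations so stale quotes are
# identifiable. Schema, carriers, and prices are invented for illustration.
import sqlite3

fares = sqlite3.connect(":memory:")
fares.execute("""CREATE TABLE fare (
    carrier TEXT, origin TEXT, dest TEXT, depart TEXT,
    price REAL, currency TEXT, observed_at TEXT)""")

rows = [("flixbus",   "Paris", "Lyon", "2025-07-01T08:00", 10.99, "EUR", "2025-06-10T12:00"),
        ("flixbus",   "Paris", "Lyon", "2025-07-01T08:00", 39.99, "EUR", "2025-06-30T23:00"),
        ("blablacar", "Paris", "Lyon", "2025-07-01T09:30", 14.50, "EUR", "2025-06-10T12:05")]
fares.executemany("INSERT INTO fare VALUES (?, ?, ?, ?, ?, ?, ?)", rows)

# most recent observation per carrier/departure, cheapest first
cheapest = fares.execute("""
    SELECT carrier, depart, price FROM fare AS f
    WHERE origin = 'Paris' AND dest = 'Lyon'
      AND observed_at = (SELECT MAX(observed_at) FROM fare
                         WHERE carrier = f.carrier AND depart = f.depart)
    ORDER BY price""").fetchall()
```

Keeping every observation (rather than overwriting) is what makes the price-trend modelling possible later.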

The end result would be an Amadeus DB of sorts, but with the inclusion of environmentally sound ground transport. It could give a direct comparison and perhaps even cause air travelers to switch to ground travel. It could even give us a Matrix ITA Software UI/query tool that’s more broad.

Request creation of free software missing from the 🄯ommons


This is a place to ask whether free open source software exists for a particular purpose. If it’s non-existent, specify your software requirements here so your dream can be well articulated, for everyone to either laugh at or share, and to give you moral support to create it yourself. Or you might pitch the idea so well that a developer loves it enough to run off and build it for you.

No other community exists for this purpose but there are some loosely related ones. If existing software closely delivers what you need but is missing a feature, you might post a wishlist/feature request here or in !bugs@sopuli.xyz.

The FSF has a software directory that can help with finding software.

Loosely related decentralised communities generally for FOSS:

There is also foss@beehaw.org, but I don’t recommend it because the mod is trigger-happy with censorship; e.g. if you post about FOSS advocacy, it gets removed because it does not relate to a particular FOSS application. There are also many more FOSS forums duplicated in centralised places that are not conducive to the digital-rights spirit of free open source software, so they are omitted.

founded 3 months ago