[-] evenwicht@lemmy.sdf.org 1 points 19 hours ago* (last edited 18 hours ago)

If bank incompetence costs you time (because someone like me won’t let incompetence fly), consider changing banks, if you’re not trapped.

[-] evenwicht@lemmy.sdf.org 1 points 19 hours ago* (last edited 18 hours ago)

In no way are you trapped with a shitty bank.

You’ve obviously never been a vagabond. And you probably don’t have much of a grasp of the KYC shit-show pushed by banks now.

No, customer service isn’t cheap, and you’re the guy making wait times over an hour.

Happy to do so. There is a shortage of consumers who won’t let incompetence fly. Like I said, it’s a shit bank. If the service downgrades because of something I did and other customers bounce because of it, this is a good thing. I couldn’t hope for more.

OTOH, the banker learned from me how checks work, and thus will be more competent the next time a customer would otherwise get boned by a banker who thinks debts vanish when a check expires. The banker’s training came from a customer taking up a phone slot, but that banker will be more efficient on this topic and faster going forward.

[-] evenwicht@lemmy.sdf.org 1 points 19 hours ago

Of course being ethical is almost always at odds with convenience. Corporations know this, and they exploit it. Most people have not read Tim Wu’s “Tyranny of Convenience” essay. If folks were not so attached to convenience, corporations would get away with fewer shenanigans.

[-] evenwicht@lemmy.sdf.org 13 points 2 days ago* (last edited 2 days ago)

If you think it’s over the money, you’ve missed the plot.

There is an ethical problem with how they operate. If you let them get away with their shenanigans, you support them. I will not. Fuck banks. And fuck their shenanigans. When they pulled this shit, it became my ethical duty to cost them. Their postage cost exceeds the value of the check, and their phone operator costs are high. So I’m happy to ensure their profit-driven exploitation backfires fully.

Mobile deposits: most banks have scrapped remote deposits via the web. Most banks are happy to exclude those outside their exclusive smartphone ecosystem, and they try to push you into Google’s walled garden to obtain their forced-obsolescence app (so Google, once you get a mobile phone subscription to activate a Google account, knows where you bank). Anything to cattle-herd boot lickers onto the bank’s closed-source spyware app is part of their game. The ethical problems with this could fill a book.

I tried hacking together an Android emulator setup to take a JPG of a check and emulate the camera within the Android VM using the Linux GStreamer tool. I tried that back when I was willing to briefly experiment with a closed-source bank app I exfiltrated using Raccoon. It didn’t work with the banking app… the app was too defensive. I was lucky it even ran on the emulator; many banking apps detect the emulator and refuse to run.
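For the curious, a rough sketch of the camera-emulation idea (device numbers and filenames are illustrative, not my exact commands; it assumes the v4l2loopback kernel module and GStreamer with the “good” plugin set):

```shell
# Create a fake V4L2 camera device at /dev/video9:
sudo modprobe v4l2loopback video_nr=9 card_label="fakecam"

# Decode the JPG of the check once, then repeat that frame forever
# into the loopback device so it looks like a live camera feed:
gst-launch-1.0 filesrc location=check.jpg ! jpegdec ! imagefreeze \
  ! videoconvert ! v4l2sink device=/dev/video9

# The Android emulator can then be pointed at the fake webcam, e.g.:
#   emulator -avd myavd -camera-back webcam0
```

Even when this works mechanically, a defensive banking app can still refuse to run inside an emulator.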

Can’t reach an ATM for deposits from overseas. But also, when I am in the country, it’s a long drive from the house to an ATM.

So deposits by mail are the most sensible in my situation.

They fucked up. They made you whole.

The idiot who charged the interest was just the first fuckup. And it’s not a significant fuckup. The notable fuckup here is the deliberate corporate-wide policy in how they deal with small credits that leads to a paper check in the mail. It’s the shitty policy that disables them from fixing their fuckups. A fuckup is fine if they can fix sensibly. But this is not the case here.

IIUC, it’s what the Scots call a running goat fuck… which is fuck-up after fuck-up on top of fuck-ups.

24

I’m trapped with a shitty bank and won’t go into the reasons here.

This is what happened: I called to pay the bill. The idiot at the bank gave a pay-off quote that included interest charges of less than $2, even though the bill was not due yet. There should be no interest in this case. He would not listen. So I’m like, fuck it, I’ll pay what he quotes and then dispute the interest charge when it comes.

The billing system did the right thing.. did not charge interest. So of course I ended up with a tiny credit <$2. The asshole bank could not just let that small credit sit because there is a business advantage if they zero out all positive balances to increase the chances of a negative the next month. So they mailed a paper check for the credit.

It’s not worth my time and effort to cash a check so small. But I’ll also be damned if I let that be a donation to the bank. So I just sat on the check until it became stale and worthless. The check is bad, but the bank still owes me the money. So then I call the bank to say: hey, don’t bother sending another check, just credit my account with that amount, toward my current balance. The banker refused. In fact, the banker tried to say the money was gone -- that I lose it because it’s my fault the check is bad. I know that’s not how it works. The check goes bad but the debt does not. The bank still owes me the money. Customer service genuinely seemed clueless about that.

I spent 90 minutes on the phone arguing over this. Customer rep had to repeatedly check with management. In the end, the bank still refused to credit the account but they agreed to send another check. WTF. I guess I will just repeat the pattern until they learn.

Customer service is not cheap. Someone once told me what the bank pays per minute on phone support. I don’t recall what the figure was but it was shockingly high. I wonder how much this tiny check will cost the bank as it sits in limbo and causes repeat customer service calls, in a loop.

5

Some obnoxious piece of shit scumbag has been non-stop attacking Debian testers with a flood of spam the past several days.

Why attack a harmless charity? Why not target a shitty exploitative platform like MS Github instead? It’s choosing to target an organisation that just helps people. It would be like entering a public library and taking a shit on the carpet, when you could have just as well taken a shit on the hood of Elon Musk’s car instead, for example.

This is not just a rant. I want a serious answer. I have a vague notion of why spam exists. It’s not just malice without purpose. Cyber criminals build botnets to effectively create powerful supercomputers for very little money by hijacking their victim’s CPU cycles and the energy that drives them. Then they sell supercomputing access on the black market, or they mine cryptocurrency. Those criminal botnets need to be controlled surreptitiously.

Spam somehow facilitates the command and control of the botnets. Supposedly… though I struggle to understand why. When spam is sent to some arbitrary recipient, how does that serve as a command signal? Is it perhaps a mechanism that only serves the botnet if the spam recipient happens to be unwittingly part of the botnet? In which case, for every recipient who is not part of the botnet, the spam is just waste for everyone?

Surely there are more clever ways to anonymously control a botnet without shitting on the world with spam. Surely the rage spam causes motivates intelligence agencies to shut them down, no?

1990s botnets were clumsy. The greed was unhinged, so they would steal as many CPU cycles from each PC as possible. When someone’s PC became so sluggish it was intolerably dysfunctional, that was a big red flag. The victim reinstalled Windows to recover, which shrank the botnet and increased the botnet owner’s burden of having to reinfect more PCs. So they got more clever… they steal only enough processing to go unnoticed. I’m not seeing how spam fits into modern-day clever crimes.

15
submitted 1 week ago* (last edited 1 week ago) by evenwicht@lemmy.sdf.org to c/infosec@infosec.pub

Before sharing my email address with some person or org, I do an MX DNS lookup on the domain portion of their email address. It’s usually a reliable indicator. That is, if the result is not of the form *.mail.protection.outlook.com, then that recipient is not using Microsoft’s mail server.

But sometimes I get stung by an exception. The MX lookup for one recipient yielded barracudanetworks.com, so I trusted them with email. But then they sent me an email and I saw a header like this:

Received: from *.outbound.protection.outlook.com (*.outbound.protection.outlook.com…

Is there any practical way to more thoroughly check whether an email address leads to traffic routing through Microsoft (or Google)?
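For concreteness, a shell sketch of this kind of check (example.com stands in for the recipient’s domain; the SPF lookup is a supplementary heuristic I’m suggesting, not part of the original MX check):

```shell
# The basic MX check -- dig comes from bind-utils/dnsutils:
dig +short MX example.com

# The MX alone can lie (e.g. a Barracuda filtering front-end), so also
# inspect the SPF TXT record: outbound mail often reveals Microsoft
# even when the MX record does not.
dig +short TXT example.com | grep -i 'spf.protection.outlook.com'
```

A hit on `spf.protection.outlook.com` in the SPF record suggests the domain sends its outbound mail through Microsoft even if inbound mail lands elsewhere first.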

5
submitted 2 weeks ago* (last edited 2 weeks ago) by evenwicht@lemmy.sdf.org to c/right_to_unplug@sopuli.xyz

I’m working on a campaign against the use of Facebook by gov administrations. So far I have like 20 or so pages covering human rights violations by the gov when they impose the use of Facebook. But I have not yet written anything about addiction or mental health in this context.

I have never used Facebook myself, so I’m working somewhat blind. The question is whether Facebook is addictive and ultimately to what extent it can be faulted for mental health issues. I mean, of course it’s addictive to some extent, as is just about anything. But the question is: when a government pushes the use of Facebook onto people, can it reasonably be argued that the gov is significantly undermining people’s human right to live in good health? Or is that far-fetched enough that it would actually dilute the campaign against gov-forced use of FB?

2
submitted 2 weeks ago by evenwicht@lemmy.sdf.org to c/tor@infosec.pub

There are countless public wi-fi access points that push captive portals which collect identity info on users and track them. The purpose of the privacy intrusion is (allegedly) so they can respond to complaints about unacceptable use. Or worse, so they can directly snoop on their own users’ activity to police their behavior. Those burdens are not cost-free. Babysitters cost money.

Tor solves this problem. There can be no expectation that a service provider nanny Tor users because they naturally cannot see what users are doing. You are only responsible for what you know -- and for what data you collect. The responsibility of Tor users falls on the exit nodes (to the extent they are used, as opposed to onions).

It’s bizarre how public access admins often proactively block egress Tor traffic, out of some ignorant fear that they would be held accountable for what the user does. It’s the complete opposite. Admins /shed/ accountability for activity that they cannot monitor. If it’s out of their hands, it’s also beyond their responsibility. This is Infosec Legal Aspects 101 -- don’t collect the info if you don’t want the responsibility that the data collection brings. Somehow most of the population has missed that class and remains driven by FUD instead. They foolishly do the opposite: copious overcollection, erroneously thinking that’s the responsible thing to do.

In principle, if you want to deploy gratis Internet access to a population free of captive portals and with effortless administration that respects the privacy of users, then it is actually clearnet traffic that you would block. If you allow only Tor traffic, you escape the babysitter role entirely.

In thinking about how to configure this, my first thought was: set up a Tor middlebox transparent proxy and force all traffic over Tor. The problem with that is you would actually still have visibility on the traffic before it gets packaged for Tor, so it fails in the sense that you could technically be held liable for not babysitting the traffic between the user and the Tor network. OTOH, the chances of receiving a complaint from the other side of the Tor cloud are naturally quite low. Still, it’s flawed.

It really needs to be a firewall that blocks all except Tor guard nodes. A “captive portal” of sorts could be used to inform clearnet users that only Tor traffic is permitted, which could give some basic advice about Tor, such as local workshops on installing a Tor client.
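A rough sketch of such a firewall, assuming nftables and the Tor project’s public Onionoo API for the relay list (the table and chain names are made up, and this naive version whitelists all running relays rather than only guard nodes):

```shell
# Fetch the addresses of all running Tor relays and extract IPv4s:
curl -s 'https://onionoo.torproject.org/summary?type=relay&running=true' \
  | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' | sort -u > /tmp/tor-relays.txt

# Forwarding policy: drop everything by default...
nft add table inet torgate
nft add chain inet torgate fwd \
  '{ type filter hook forward priority 0; policy drop; }'

# ...then allow outbound connections only toward known relays on
# common ORPort values (many relays listen on 443 or 9001):
while read -r ip; do
  nft add rule inet torgate fwd ip daddr "$ip" tcp dport '{ 443, 9001 }' accept
done < /tmp/tor-relays.txt
```

In practice the list would need periodic refreshing, IPv6 handling, and narrowing to guards; this just shows the default-drop shape of the idea.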

It imposes a barrier to entry of both knowledge and wisdom on users. So be it; it is what it is. Not everyone can expect a free hand-out, and it’s usually Tor users who face the oppression of access denial. Of course the benefit is that some people will decide to install Tor in order to use the hotspot.

8
submitted 2 weeks ago* (last edited 2 weeks ago) by evenwicht@lemmy.sdf.org to c/right_to_unplug@sopuli.xyz

I’ve pulled the plug on the Internet. After a few months of being offline, these are my findings:

  • Love DAB radio; even in a region with only one English station, it’s enough to get my news. Very grateful for BBC!
  • Great to exercise the power of boycott and say “fuck you” to shitty ISPs. In the US, most ISPs support the republicans. And most ISPs worldwide do not accept cash payments (thus support the oppression of forced-banking).
  • Very grateful for some¹ public libraries with truly open and anonymous wi-fi. (¹ some is stressed b/c sadly most public libraries outsource Internet to shitty big corps and are elitist enough to deny wi-fi to those who lack a GSM subscription [i.e. those who most need wi-fi], and some libs also block egress Tor [indeed they are naïve about how liability and accountability works])
  • Web enshittification has less of an impact when you only get the web in small doses during periodic library visits.
  • No more wasting time doom scrolling.
  • Money saving. Broadband costs are unreasonable in most parts of the world.
  • Sending postal mail instead of e-mail is liberating, as it cuts Microsoft out of the loop (almost all businesses and gov offices use MS email). Also fun to typeset letters in LaTeX.
  • ArgosTranslate enables offline people to machine-translate documents. This is great for privacy anyway, because it’s a bad idea to trust the cloud with translating personal docs you get in the mail.

Shortcomings:

  • Severe lack of offline apps. In the 90s and 2000s when many people had spotty access, apps were more accommodating of that. There are no Mastodon, Lemmy, or Kbin apps to facilitate offline reading and writing, and periodic syncing.
  • Most websites are now designed to assume everyone has 24/7 access. Coupled with an unhealthy and short-sighted hostility toward bots, webpages are rich with JS. They are a shit-show to download and tend not to make content easily fetchable for later consumption.
  • Can be tedious to find open hotspots outside of libraries where you can make enough noise to make a VOIP call. (UPDATE: fortunately hospitals tend to have open wi-fi access and generally no noise constraints. Some libraries have a lobby where VOIP calls can be made)

I could really use a way to synchronize posts and messages (XMPP, Lemmy, Mastodon, e-mail) with a smartphone, and then to synchronize the phone with a PC. This would really cut down on having to lug a laptop around. An Android app would serve the most people, but it’d perhaps be easier to implement on a linux-based phone like PostmarketOS.

Advice if you want to try unplugging, in baby steps

A non-stop broadband contract with continuous billing is designed to be inconvenient to stop. Perhaps there is a threat of startup costs if you want to return to their service, and the pain of returning equipment. Bear in mind they are exploiting your auto-pilot comfort by giving startup discounts to new customers but not to their loyal boot-lickers. You can probably save money by bouncing around between providers anyway.

Find a cheap prepaid mobile data package and make your phone a hotspot. Or, if you are more advanced, get an LTE USB modem that plugs into a router that supports a GSM uplink. “Cheap” in this case does not mean cheap per meg -- it means cheaper per month if you can greatly reduce your consumption by doing things like killing the graphics in your web browser. If you have enough discipline you can get by on ~5 GB/month for probably around $5—10. It’s enough for basic comms.

When your 5gb (or whatever) of mobile data runs out, don’t topup right away. See how long you can hold out. Use the library wifi. I would have a week of offline time after my data runs out before topping up. Then each cycle that timespan grew. Now I have been offline for months.

Prepaid mobile broadband is a good middle step because you are not pushed to stay on an auto-pilot plan. It’s actually the opposite.. you have the inconvenience of topping up each time you need to continue your access, which is perfect for a progression into offlineness.

18
submitted 2 weeks ago* (last edited 2 weeks ago) by evenwicht@lemmy.sdf.org to c/dabradio@feddit.uk

I wonder what the rationale is. It will be shitty if all FM radios become bricks because of some stupid push to obsolete them.

Or is there a smarter plan, like using the bandwidth for something more worthy than FM radio?

UPDATE

Some more FM advantages:

  • Fast channel changing.
  • Better reception. There is a station in my city that transmits both DAB and FM. Their DAB signal has chronic cut-outs but their FM station is good enough. And in general, weak FM signals are still useful while weak DAB signals are unusable. So a DAB-only policy marginalises people who live remote from the cities.

8

cross-posted from: https://lemmy.sdf.org/post/36402193

The knee-jerk answer when an app pushes designed obsolescence by advancing the min Android API required is always “for security reasons…” It’s never substantiated. It’s always an off-the-cuff snap answer, and usually it does not even come from the developers. It comes from those loyal to the app and those who perhaps like being forced to chase the shiny with new phone upgrades.

Banks, for example, don’t even make excuses. They can just neglect to be mindful of the problem and let people assume that some critical security vuln emerged that directly impacts their app.

But do they immediately cut off access attempts on the server side that come from older apps? No. They lick their finger, stick it in the air, and say: feels like time for a new version.

It’s bullshit. And the pushover masses just accept the ongoing excuse that the platform version must have become compromised to some significant threat -- without realising that the newer version bears more of the worst kinds of bugs: unknown bugs, which cannot be controlled for.

Banks don’t have to explain it because countless boot-licking customers will just play along. After all, these are people willing to dance for Google and feed Google their data in the first place.

But what about FOSS projects? When a FOSS project advances the API version, they are not part of the shitty capitalist regime of being as non-transparent as possible for business reasons. A FOSS project /could/ be transparent and say: we are advancing from version X to Y because vuln Z is directly relevant to our app and we cannot change our app in a way that counters the vuln.

The blame-culture side-effect of capitalism

Security analysis is not free. For banks and their suppliers, it is cheaper to bump up the AOS API than it is to investigate whether it is really necessary.

It parallels the pharmaceutical industry, where it would cost more to test meds for an accurate expiry date. So they don’t bother… they just set an excessively safe, very early expiration date.

Android version pushing is ultimately a consequence of capitalist blame-culture. Managers within an organisation simply do not want to be blamed for anything because it’s bad for their personal profit. Shedding responsibility is the name of the game. And outsourcing is the strategy. They just need to be able to point the blame away from themselves if something goes wrong.

Blindly chasing the bleeding-edge latest versions of software is actually security-ignorant¹ but upper management does not know any better. In the event of a compromise, managers know they can simply shrug and say “we used the latest versions” knowing that upper managers, shareholders, and customers are largely deceived into believing “the latest is the greatest”.

¹ Well informed infosec folks know that it’s better to deal with the devil you know (known bugs) than it is to blindly take a new unproven version that is rich in unknown bugs. Most people are ignorant about this.

Research needed

I speak from general principles in the infosec discipline, but AFAIK there is no concrete research specifically in the context of the onslaught of premature obsolescence by Android app developers. It would be useful to have some direct research on this, because e-waste is a problem and credible science is a precursor to action.


[-] evenwicht@lemmy.sdf.org 8 points 2 weeks ago

Diligent consumers don’t do that. They pay their bill off faster than fees can be incurred. It’s the other consumers, the undisciplined and the poor, who get sucked dry by fees. These are not the demographic of international travelers. One demographic is subsidizing another.

The interesting thing is that if you’re in the diligent demographic, you can make the shitty bank lose money. Profit from those they exploit is the same whether you create a loss for the bank or not.


16

Europe has a legal cap (0.9%) on the fee the credit card companies charge to the merchants. In the US there is no limit, so merchants get hammered with fees of ~3—5%. US credit cards often offer a 1% kickback to cardholders for using their card. Some credit cards offer as much as 5% as a kickback on certain categories of purchases, like groceries. Some credit cards also charge a zero percent markup on foreign currency exchange.

So if you use a forex-free rewards card in Europe on a purchase whose rebate exceeds 0.9%, the bank takes in at most 0.9% from the merchant side. On a 5% rebate, the bank loses 4.1%.

Or am I missing something? The bank obviously still profits from purchases in categories with a lower rebate, and late fees and interest.. but of course only if you make that happen.
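The arithmetic, worked on a 100-unit purchase using the figures above (the 0.9% cap and 5% rebate are the post’s numbers, not independently verified):

```shell
# Issuer's take on a 100-unit purchase: merchant-side fee in,
# cardholder cashback out.
awk 'BEGIN {
  amount   = 100
  fee      = amount * 0.009   # merchant-side revenue: 0.90
  cashback = amount * 0.05    # paid to cardholder:    5.00
  printf "issuer net: %.2f per 100 spent\n", fee - cashback
}'
# prints: issuer net: -4.10 per 100 spent
```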

9
submitted 2 weeks ago* (last edited 2 weeks ago) by evenwicht@lemmy.sdf.org to c/right_to_unplug@sopuli.xyz

cross-posted from: https://lemmy.sdf.org/post/36391484

Broadcast TV has its shit together, at least in the US. You can set up MythTV to fetch TV schedules without Internet access; it can grab the schedules from the broadcast signals. You can also subscribe to Internet services that give TV scheduling far into the future, but that’s a non-gratis frill. The in-band scheduling info goes a few days out, which is good enough.

Radio listeners are screwed on this. Neither FM nor DAB+ carries scheduling info. And worse, there is no Internet service that produces an aggregated radio schedule. You must find the websites hosted by each radio station individually and navigate their shitty user interfaces. Sometimes the program listings are too vague to be useful.

Apparently it was completely overlooked in the drafting of the DAB specs. In principle, a clever broadcaster could embed schedule info into the album art using steganography, or stego on the audio content, but then no appliances would decode such hacks.

I have no idea if satellite radio is on the ball. I think satellite radio is a US-specific option as DAB is nearly non-existent in the US. Vice-versa in Europe.

As someone who has pulled the plug on residential Internet, I cling to the radio more than most. If DAB were to include metadata and there were a DAB-capable PC card, it would be great to have a MythTV-like setup to record radio programs. As it stands, we are driven to do a lot of channel surfing, which is worse on DAB than on FM because of the 2½-second delay with each channel change to decode a chunk of data (so surfing 10 channels wastes 25 seconds in silence).

I’m sure radio broadcasters would get more market share if DARs (digital audio recorders) were a thing. That sort of utility might even enable more people to be willing to experiment with unplugging from the Internet.

Update

The scheduling info is in fact part of a standard called SPI:

https://lemmy.sdf.org/post/36391484/20629029

It’s just that device makers are not bothering to implement it, apparently.

[-] evenwicht@lemmy.sdf.org 10 points 2 months ago

I’ll have a brief look but I doubt ffmpeg would know about DVD CSS encryption.

[-] evenwicht@lemmy.sdf.org 19 points 4 months ago* (last edited 4 months ago)

If anyone is writing or maintaining a playbook/handbook for how to run an authoritarian regime, removing open data would be a play to add.

[-] evenwicht@lemmy.sdf.org 8 points 5 months ago* (last edited 5 months ago)

It’s possible that it’s an accident, but unlikely IMO. The accidental case is overload and timing fragility. Tor introduces a delay, so if a server already has a poor response time and the user’s browser has a short timeout tolerance, then it’s a recipe for a timeout. Firefox does better than Chromium on this (default configs). But I tried both browsers. At the state level I think they made a conscious decision to drop packets.

It’s also possible that they are not blocking all of Tor but just the exit node I happened to use. I did not exhaustively try other nodes, but I was blocked on two different days (thus likely via two different nodes). In any case, this forum should help sort it out. Anyone can chime in with other demographics who are blocked, or Tor users who are not blocked.

(edit) ah, forgot to mention: www.flsenate.gov also drops Tor packets.

[-] evenwicht@lemmy.sdf.org 8 points 5 months ago* (last edited 5 months ago)

infosec 101:

  • confidentiality
  • integrity
  • availability

If users who should have access (e.g. US taxpayers) are blocked, there is an availability loss. Blocking Tor reduces availability. Which by definition undermines security.

Some would argue blocking Tor promotes availability because a pre-emptive strike against arbitrary possible attackers prevents DoS, which I suppose is what you are thinking. But this is a sloppy practice by under-resourced or under-skilled workers. It demonstrates an IT team that lacks the talent needed to provide resources to all legit users.

A mom-and-pop shop, sure, we expect them to have limited skills. But the US federal gov? It’s a bit embarrassing. The Tor network of exit nodes is tiny. The IRS should be able to handle a full-on DDoS attempt from Tor, because such an effort should bring down the Tor network itself before a federal gov website. If it’s fear of spam, there are other tools for that. IRS publications could of course be on a separate host from the one that collects feedback.

[-] evenwicht@lemmy.sdf.org 8 points 5 months ago* (last edited 5 months ago)

This is not a news forum. It’s a boycott organisation and support forum. Do your boycotts tend to last less than 1 year? That’s not really impactful. (which is not to say impact is the only reason to boycott… I boycott just to ensure that I am not part of the problem, impact or not)

I have been boycotting Mars at least since 2018 when I found out they spent $½ million lobbying against GMO labeling in the US. Even if they were to turn that around and pay more money to lobby for GMO transparency, I would still boycott their vending machines. Not just because they got caught in a data abuse scandal, but because they lied about it, which means they cannot be trusted with technology.

[-] evenwicht@lemmy.sdf.org 7 points 7 months ago* (last edited 7 months ago)

Don’t Canadian insurance companies want to know where their customers are? Or are the Canadian privacy safeguards good on this?

In the US, Europe (despite the GDPR), and elsewhere, banks and insurance companies snoop on their customers to track their whereabouts as a normal, common way of doing business. They insert surreptitious tracker pixels in email to record not only the fact that you read their message, but also when you read it and from which IP (which gives your whereabouts). If they suspect you are not where they expect you to be, they take action: they modify your policy. It’s perfectly legal in the US to use sneaky, underhanded tracking techniques rather than the transparent mechanism described in RFC 2298. If your suppliers are using RFC 2298 and not involuntary tracking mechanisms, lucky you.

[-] evenwicht@lemmy.sdf.org 16 points 7 months ago* (last edited 7 months ago)

You’re kind of freaking out about nothing.

I highly recommend Youtube video l6eaiBIQH8k, if you can track it down. You seem to have no general idea about PDF security problems.

And I’m not sure why an application would output a pdf this way. But there’s nothing harmful going on.

If you can’t explain it, then you don’t understand it. Thus you don’t have answers.

It’s a bad practice to just open a PDF you did not produce without safeguards. Shame on me for doing it; I got sloppy, but it won’t happen again.

