1
2
submitted 3 weeks ago by evenwicht@lemmy.sdf.org to c/tor@infosec.pub

There are countless public wi-fi access points that push captive portals which collect identity info on users and track them. The purpose of the privacy intrusion is (allegedly) so they can respond to complaints about unacceptable use. Or worse, so they can directly snoop on their own users' activity to police their behavior. Those burdens are not cost-free. Babysitters cost money.

Tor solves this problem. There can be no expectation that a service provider nanny Tor users, because the provider naturally cannot see what users are doing. You are only responsible for what you know -- and for what data you collect. Responsibility for Tor users' traffic falls on the exit nodes (to the extent exits are used at all, as opposed to onion services).

It’s bizarre how public access admins often proactively block egress Tor traffic, out of some ignorant fear that they would be held accountable for what the user does. It’s the complete opposite. Admins /shed/ accountability for activity that they cannot monitor. If it’s out of their hands, it’s also beyond their responsibility. This is Infosec Legal Aspects 101 -- don’t collect the info if you don’t want the responsibility that the data collection brings. Somehow most of the population has missed that class and remains driven by FUD instead. They foolishly do the opposite: copious overcollection, erroneously thinking that’s the responsible thing to do.

In principle, if you want to deploy gratis Internet access to a population free of captive portals and with effortless administration that respects the privacy of users, then it is actually clearnet traffic that you would block. If you allow only Tor traffic, you escape the babysitter role entirely.

In thinking about how to configure this, the first thought was: set up a Tor middlebox as a transparent proxy and force all traffic over Tor. The problem with that is you would still have visibility on the traffic before it gets packaged for Tor, so it fails in the sense that you could technically be held liable for not babysitting the traffic between the user and the Tor network. OTOH, the chances of receiving a complaint from the other side of the Tor cloud are naturally quite low. Still, it’s flawed.

It really needs to be a firewall that blocks everything except traffic to Tor guard nodes. A “captive portal” of sorts could inform clearnet users that only Tor traffic is permitted and give some basic advice about Tor, such as pointing to local workshops on installing a Tor client.
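
As a concrete starting point, here is a rough sketch of how such an allowlist could be generated, assuming a Linux gateway running nftables and using the Tor Project's Onionoo API to enumerate relays with the Guard flag. The table/set/chain names below are made up for illustration, and note that a bootstrapping client also needs to reach directory authorities or fallback mirrors, so in practice allowlisting all relays may work better than guards only.

```python
#!/usr/bin/env python3
"""Sketch: build an nftables allowlist of Tor guard relays.

Uses Tor's Onionoo API to list running relays with the Guard flag and
emits an nftables ruleset that drops all forwarded client traffic except
connections to those relays. Table/set/chain names are illustrative.
"""
import json
import urllib.request

ONIONOO = ("https://onionoo.torproject.org/details"
           "?flag=Guard&running=true&fields=or_addresses")

def guard_ipv4_addresses():
    with urllib.request.urlopen(ONIONOO, timeout=30) as resp:
        data = json.load(resp)
    addrs = set()
    for relay in data.get("relays", []):
        for or_addr in relay.get("or_addresses", []):
            host, _, _port = or_addr.rpartition(":")
            if not host.startswith("["):       # skip IPv6 ([addr]:port) for brevity
                addrs.add(host)
    return sorted(addrs)

def nftables_ruleset(addrs):
    # A stricter version would also pin the ORPort per relay; a bootstrapping
    # Tor client additionally needs directory authorities or fallback mirrors,
    # so allowlisting *all* relays may be more practical than guards only.
    return "\n".join([
        "table inet toronly {",
        "  set guards { type ipv4_addr; elements = { " + ", ".join(addrs) + " } }",
        "  chain forward {",
        "    type filter hook forward priority 0; policy drop;",
        "    ip daddr @guards accept",
        "  }",
        "}",
    ])

if __name__ == "__main__":
    print(nftables_ruleset(guard_ipv4_addresses()))
```

The output could be loaded with `nft -f` from a cron job so the allowlist tracks the consensus as relays come and go.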

It imposes a barrier to entry of both knowledge and wisdom on users. So be it; it is what it is. Not everyone can expect a free hand-out, and it’s usually Tor users who face the oppression of access denial anyway. Of course the benefit is that some people will decide to install Tor in order to use the hotspot.

2
8
submitted 1 month ago by ciferecaNinjo@fedia.io to c/tor@infosec.pub

A Turk was telling me about a peaceful demonstration he attended in Turkey. He said police surrounded the protest. Then someone in plain clothes threw a stone at the police. One of the demonstrators noticed that the guy who threw the stone had handcuffs in his back pocket. IOW, a cop posing as a demonstrator threw a stone in order to justify the police tagging the protest as “violent” so they could shut it down.

So of course the question is, to what extent are bad actors on Tor actually boot lickers who are working to ruin Tor for everyone?

3
8
submitted 1 month ago by ciferecaNinjo@fedia.io to c/tor@infosec.pub

There are many situations where gov-distributed public information is legally required to be open access. Yet they block Tor.

To worsen matters, the general public largely and naively believes it’s correct to call something “open access” even when access restrictions are in place.

The resource should work like this:

  1. User supplies a URL
  2. Robot tries to access that page from a variety of vantage points: different countries, residential and datacenter IPs, Tor, various VPNs, different user-agent strings, etc.
  3. A report is generated tagging the site as “openly accessible” if no obstacles (like 403s) were detected. Otherwise the site is tagged “restricted access” and the excluded demographics of people are listed.

The report should be dated and downloadable as a PDF so that activists can send it to the org behind the site with a letter saying: “your website is not open access -- please fix”.
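
A minimal sketch of the robot, checking only two vantage points (direct clearnet and Tor via a local SOCKS proxy on 127.0.0.1:9050); a real tool would add the other vantage points and the dated PDF report. It assumes the `requests` library with SOCKS support (`pip install requests[socks]`), and the user-agent string and verdict wording are placeholders.

```python
#!/usr/bin/env python3
"""Check whether a URL is reachable from different vantage points (sketch).

Only two vantage points are implemented: direct clearnet, and Tor via a
local SOCKS proxy on 127.0.0.1:9050. A real tool would add more countries,
residential/datacenter IPs, VPNs, varied user-agent strings, and emit a
dated PDF report. Requires: pip install requests[socks]
"""
import sys
import requests

VANTAGE_POINTS = {
    "clearnet": {},
    "tor": {"http": "socks5h://127.0.0.1:9050",
            "https": "socks5h://127.0.0.1:9050"},
}

def check(url):
    results = {}
    for name, proxies in VANTAGE_POINTS.items():
        try:
            r = requests.get(url, proxies=proxies, timeout=60,
                             headers={"User-Agent": "open-access-checker/0.1"})
            results[name] = r.status_code
        except requests.RequestException as exc:
            results[name] = f"error: {exc.__class__.__name__}"
    return results

if __name__ == "__main__":
    results = check(sys.argv[1])
    blocked = [name for name, status in results.items() if status != 200]
    print(results)
    print("openly accessible" if not blocked
          else f"restricted access (blocked via: {', '.join(blocked)})")
```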

This need somewhat aligns with the mission of the OONI project, but they are not doing this AFAICT.

Update

I just read an announcement about Belgium’s “open data” law, which is basically a summary of the law. It said something like “there should be no unnecessary access restrictions”. I’m not sure to what extent that accurately reflects the law, but it’s an example of what one country considers “open”, fwiw. From there, I would say most Tor blockades are not necessary but rather the work of some lazy sysadmin looking for an easy job. They of course would then like to argue that it’s “necessary” to keep the baddies out.

Update 2

The Open Knowledge Foundation Network defines open data as completely free from restrictions:

https://okfn.org/en/library/what-is-open/

4
3
submitted 2 months ago by evenwicht@lemmy.sdf.org to c/tor@infosec.pub

There is a particular public hotspot where Tor takes like an hour to establish a connection. It’s stuck on 10% and shows a running count of connection attempts upwards of 40.

What does this mean? Is it that the wi-fi operator is blocking guard nodes, but perhaps only a snapshot of guard nodes? When I finally connect, is it a case where I managed to get a more recent guard node than the wi-fi operator knows about?
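
One way to test the guard-blocking theory rather than guess: sample some guard ORPorts and try plain TCP connections from the hotspot, then compare with an unfiltered network. A rough sketch, assuming the Onionoo API is reachable (fetch the relay list elsewhere beforehand if it isn't):

```python
#!/usr/bin/env python3
"""Probe whether a network blocks Tor guard ORPorts (rough sketch).

Pulls a sample of guard relays from Onionoo (fetch the list beforehand on
an unfiltered network if the hotspot blocks Onionoo itself) and attempts a
plain TCP connect to each ORPort. Many failures on the hotspot but not on
another network would suggest guard blocking -- possibly based on a stale
snapshot of the relay list, which would explain the eventual slow connect.
"""
import json
import random
import socket
import urllib.request

ONIONOO = ("https://onionoo.torproject.org/details"
           "?flag=Guard&running=true&fields=or_addresses")
SAMPLE_SIZE = 20

def sample_guard_orports():
    with urllib.request.urlopen(ONIONOO, timeout=30) as resp:
        relays = json.load(resp)["relays"]
    endpoints = []
    for relay in relays:
        for or_addr in relay.get("or_addresses", []):
            host, _, port = or_addr.rpartition(":")
            if not host.startswith("["):          # IPv4 only, for simplicity
                endpoints.append((host, int(port)))
    return random.sample(endpoints, min(SAMPLE_SIZE, len(endpoints)))

def tcp_reachable(host, port, timeout=5):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    results = [(host, port, tcp_reachable(host, port))
               for host, port in sample_guard_orports()]
    reachable = sum(1 for _host, _port, ok in results if ok)
    print(f"{reachable}/{len(results)} sampled guard ORPorts reachable")
```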

5
5
submitted 8 months ago by evenwicht@lemmy.sdf.org to c/tor@infosec.pub

Political parties around the world have flocked to nationbuilder.com for some reason. This Tor-hostile, Cloudflare-fronted site is blocking Tor users from accessing election info. This kind of sloppy, lazy web administration is common.

But what’s a bit disturbing is that when I contact a political party to say I cannot reach their page because of the nationbuilder block page, they sound surprised, like it’s the first time they are hearing of the problem. Apparently no other Tor user has bothered to report it -- so Tor users are lazy too. That’s part of the problem.

6
7
submitted 8 months ago by evenwicht@lemmy.sdf.org to c/tor@infosec.pub

cross-posted from: https://lemmy.sdf.org/post/24375297

Tracker pixels are surprisingly commonly used by legitimate senders: your bank, your insurance company, any company you patronize. These assholes hide a 1-pixel image in HTML that reveals when you open your email and your IP address (thus your whereabouts).

I use a text-based mail client in part for this reason. But I got sloppy and opened an HTML attachment in a GUI browser without first inspecting the HTML. I inspected the code afterwards. Fuck me, I thought… a tracker pixel. Then I visited just the hostname in my browser. Got a 403 Forbidden. I was happy to see that. Can I assume these idiots shot themselves in the foot with a blanket firewall block on Tor? Or would the anti-Tor firewall be smart enough to make an exception for tracker pixel URLs?
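
That inspection step can also be scripted: list every remote resource an HTML attachment would fetch before rendering it. A minimal sketch using only the Python standard library (the tag/attribute coverage is rough, and ordinary `<a href>` links, which are not auto-fetched, will also show up):

```python
#!/usr/bin/env python3
"""List remote resources an HTML email would fetch if rendered (sketch).

Run on a saved HTML attachment *before* opening it in a GUI browser. Any
remote <img>, stylesheet, or CSS url(...) is a potential tracker pixel /
read receipt. Coverage is rough; <a href> links (not auto-fetched) are
listed too.
"""
import re
import sys
from html.parser import HTMLParser

class RemoteRefFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.refs = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if not value:
                continue
            if name in ("src", "href", "background") and \
                    value.startswith(("http://", "https://", "//")):
                self.refs.append((tag, value))
            if name == "style":
                for url in re.findall(r"url\(['\"]?(https?://[^'\")]+)", value):
                    self.refs.append((f"{tag}[style]", url))

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8", errors="replace") as f:
        html = f.read()
    finder = RemoteRefFinder()
    finder.feed(html)
    for tag, url in finder.refs:
        print(f"{tag}: {url}")
```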

7
5
submitted 9 months ago by evenwicht@lemmy.sdf.org to c/tor@infosec.pub
8
1

cross-posted from: https://sopuli.xyz/post/13489053

In the onion v2 days we had underwood2hj3pwd.onion. There were half a dozen other onion email providers, but Underwood was the only one that did not have a clearnet email alias (IIRC). That was a useful feature because you could distribute an onion address to an MS Outlook or Gmail user and they could not use it to correspond with you with Google or MS in the loop. They had just two options: step off the ad-surveillance platform or not contact you at all. That option died with Underwood.

The other onion email services all have a clearnet translation. So if (for example) I give a gmail user this address:

foo@yllvy3mhtamstbqzm4wucfwab57ap6zraxqvkjn2iobmrtxdsnb37dqd.onion

and they are motivated to reach me, they can figure out that the corresponding clearnet alias is foo(/at/)onionmail.info and then they can use that address to send me a msg that is then shared with their surveillance advertiser. And worse, that’s less effort for them than obtaining an onion email account.

So what I do now is give an XMPP account. Since Google has abandoned jabber and MS never partook, XMPP avoids Google and MS. But XMPP is not a drop-in replacement for email. OMEMO is glitchy/buggy with pitfalls.

I would like to offer an email option. Ideally, an onion email service would offer a clearnet alias that cannot be determined from the onion address, which implies a different userid string.
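
One hypothetical way a provider could do that: derive the clearnet local-part from a keyed hash of the onion-side userid, with the key held only by the provider, so the alias exists but cannot be computed from the onion address. This is purely a design sketch, not how any existing onion mail service works; the key, domain, and truncation length below are made up.

```python
#!/usr/bin/env python3
"""Hypothetical sketch: a clearnet alias that can't be derived from the
onion address.

The provider holds SECRET_KEY server-side and derives the clearnet
local-part from a keyed hash of the onion-side userid, storing the mapping
for inbound routing. Someone who only knows foo@<onion>.onion cannot
reconstruct the clearnet alias. Key, domain, and truncation are made up.
"""
import hashlib
import hmac

SECRET_KEY = b"provider-only-secret"   # hypothetical; never leaves the server

def clearnet_alias(onion_userid: str,
                   clearnet_domain: str = "example-mail.info") -> str:
    digest = hmac.new(SECRET_KEY, onion_userid.encode(), hashlib.sha256).hexdigest()
    return f"{digest[:16]}@{clearnet_domain}"

if __name__ == "__main__":
    # The provider would store {onion_userid: alias}; the reverse mapping is
    # not computable without SECRET_KEY.
    print(clearnet_alias("foo"))
```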

9
1
Torsocks $udp_app (infosec.pub)
submitted 1 year ago by coffeeClean@infosec.pub to c/tor@infosec.pub

What happens if an app uses UDP instead of TCP (or both UDP and TCP), and you use the torsocks wrapper script? Would the UDP connections all leak without the Tor user knowing?
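
Tor itself only carries TCP streams, so the practical question is whether torsocks rejects the UDP socket calls or lets the datagrams out unproxied. Rather than assert the answer, here is a small probe one could run under torsocks while watching the uplink with a packet capture (the 9.9.9.9:9 endpoint is an arbitrary placeholder):

```python
#!/usr/bin/env python3
"""Tiny UDP probe to run under torsocks and observe the result (sketch).

Usage:  torsocks python3 udp_probe.py
Watch your uplink with a packet capture (e.g. tcpdump 'udp and port 9'):
if the datagram shows up there, it left the machine outside Tor; if the
send raises an error instead, torsocks refused the non-TCP socket.
9.9.9.9:9 (discard) is just an arbitrary remote endpoint for the test.
"""
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    sock.sendto(b"udp leak probe", ("9.9.9.9", 9))
    print("sendto() succeeded -- check your packet capture for a leak")
except OSError as exc:
    print(f"sendto() was blocked: {exc}")
```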

10
1
submitted 1 year ago by coffeeClean@infosec.pub to c/tor@infosec.pub

I simply make a GDPR request. Write to the Tor-hostile data controller and make an Article 15 request for a copy of all your data. Also ask for a list of all entities your data is shared with.

The idea is that if a website blocks Tor (or worse, uses Cloudflare to also share all traffic with a privacy offender), then they don’t give a shit about privacy. So you punish them with some busy work and that busy work might lead to interesting discoveries about data abuses.

Of course this only works in the EU and also only works with entities that have collected your personal data non-anonymously.
