So gimp on android?
less gooo!!
I can't read the discussion because some damn Canadian neko waifu thinks I'm a bot.
It does that for all clients
You just need to wait for the proof of work to complete
I will never find the irony in this to be anything other than pathetic.
The one legitimate grievance against Bitcoin and other PoW cryptocurrencies - the wasteful burning of energy on throw-away calculations simply to prove the work was done, the environmental cost of meaningless CPU cycles burned at distributed scale purely for the sake of burning them - has been so eagerly grasped by people who are largely doing it to foil another energy-wasteful infotech invention.
It really is astonishing.
Do you have a better way? It is way more private than anything else I've seen.
From an energy usage perspective it also isn't bad. Spiking the CPU for a few seconds is minor, especially compared to other tasks.
The mersenneforums have users solve an obscure (to a non-mathematician) but relatively simple number theory problem.
Yeah, tarpits. Or even just intentionally lagging the connection a little, or putting a delay on the response for some MIME types. Delays don't consume nearly as much processing as PoW. Personally, I like tarpits that trickle out content like a really slow server, behind hidden URLs that real users are not likely to click on. These are about the least energy-demanding solutions that have a chance of fooling bots; a true, no-response tarpit would use less energy, but is easily detected by bots and terminated.
Proof of work is just a terrible idea, once you've accepted that PoW is bad for the environment, which it demonstrably is.
Tarpits rely on crawlers being dumb. That isn't necessarily the case with a lot of stuff on the internet. It isn't uncommon for a bot to render a page and then only process the visible stuff.
Also I've yet to see any evidence that Anubis is any worse for the environment than any basic computer function.
Tarpits suck. Not worth the implementation or overhead. Instead the better strat is to pretend the server is down with a 503 code or that the url is invalid with a 404 code so the bots stop clinging to your content.
Also we already have non-PoW captchas that don't require javascript. See: go-away for these implementations
Good luck detecting bots...
It's actually not that hard. Most of these bots are using a predictable scheme of headless browsers with no js or minimal js rendering to scrape the web page. Fully deployed browser instances are demonstrably harder to scale and basically impossible to detect without behavioral pattern detection or sophisticated captchas that also cause friction to users.
The problem with bots has never rested solely on detectability. It's about:
A. How much you inconvenience the user to detect them
B. Impacting good or acceptable bots like archival, curl, custom search tools, and loads of other totally benign use cases.
There is negligible server overhead for a tarpit. It can be merely a script that listens on a socket and never replies, or it can reply with markov-generated html with a few characters a second, taking minutes to load a full page. This has almost no overhead. Implementation is adding a link to your page headers and running the script. It's not exactly rocket science.
Which part of that is overhead, or difficult?
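For concreteness, here's a minimal sketch of the trickle idea in Rust, using only the standard library. The port, the canned filler string, and the use of a repeated snippet instead of a real markov generator are all placeholders for illustration, not anyone's actual setup:

```rust
// Minimal trickle-tarpit sketch: accept a connection, then dribble out an
// HTML body a few bytes every couple of seconds so a dumb crawler sits and waits.
// Standard library only; error handling is deliberately crude.
use std::io::Write;
use std::net::TcpListener;
use std::thread;
use std::time::Duration;

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8081")?; // illustrative port
    for stream in listener.incoming() {
        let Ok(mut stream) = stream else { continue };
        thread::spawn(move || {
            // Chunked transfer encoding lets us keep the response open indefinitely.
            let _ = stream.write_all(
                b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\nTransfer-Encoding: chunked\r\n\r\n",
            );
            // A real markov generator could go here; repeated filler shows the mechanism.
            let filler = b"<p>lorem ipsum dolor sit amet</p>";
            loop {
                let size_line = format!("{:x}\r\n", filler.len());
                if stream.write_all(size_line.as_bytes()).is_err()
                    || stream.write_all(filler).is_err()
                    || stream.write_all(b"\r\n").is_err()
                {
                    break; // client gave up, which is fine
                }
                thread::sleep(Duration::from_secs(2));
            }
        });
    }
    Ok(())
}
```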
It certainly is not negligible compared to static site delivery, which can breezily be cached, unlike on-the-fly tarpits. Even traditional static sites are getting their asses kicked sometimes by these bots. And you want to make that worse by having the server generate text with markov chains for each request? The point for most is reducing the sheer bandwidth and CPU cycles being eaten up by these bots hitting every endpoint.
Many of these bots are designed to stop hitting endpoints when they return codes that signal they've flattened it.
Tarpits only make sense from the perspective of someone trying to cause monetary harm to an otherwise uncaring VC-funded mob with nigh endless amounts of cash to burn. Chances are your middling attempt at causing them friction isn't, on its own, actually going to get them to leave you alone.
Meanwhile you burn significant amounts of resources and traffic is still stalled for normal users. This is not the kind of method a server operator who actually wants a dependable service deploys to try to get back up and running again. You want the bots to hit nothing even slightly expensive (read: preferably something minimal you can cache or mostly cache) and to never come back.
A compromise between these two things is what Anubis is doing. It inflicts maximum pain (on those attempting to bypass it - otherwise it just fails) for minimal cost by creating a small seed (more trivial than even a markov chain -- it's literally just a SHA-256 hash) that a client then has to solve a challenge based on. It's nice, but certainly not my preference: I like go-away because it leverages browser APIs these headless agents don't use (and subsequently lets JS-less browsers work) for this kind of problem. Then, if you have a record of known misbehavers (their IP ranges, etc.), or some other scheme to keep track of failed challenges, you hit them with fake server-down errors.
Markov chains and slow loading sites are costing you material just to cost them more material.
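To make the mechanism concrete, here's a rough hashcash-style sketch of the kind of SHA-256 challenge being described: the server hands out a seed and a difficulty, the client grinds nonces until the hash has enough leading zero bits, and the server verifies with a single hash. This is a generic illustration (using the `sha2` crate), not Anubis's actual code; the function names, seed, and difficulty are made up:

```rust
// Generic hashcash-style proof of work, as described above.
// Not Anubis's real implementation; values are illustrative.
use sha2::{Digest, Sha256};

/// Count the leading zero bits of a hash.
fn leading_zero_bits(hash: &[u8]) -> u32 {
    let mut bits = 0;
    for &byte in hash {
        if byte == 0 {
            bits += 8;
        } else {
            bits += byte.leading_zeros();
            break;
        }
    }
    bits
}

/// Client side: grind nonces until sha256(seed || nonce) clears the difficulty.
fn solve(seed: &str, difficulty: u32) -> u64 {
    let mut nonce = 0u64;
    loop {
        let hash = Sha256::digest(format!("{seed}{nonce}").as_bytes());
        if leading_zero_bits(&hash) >= difficulty {
            return nonce;
        }
        nonce += 1;
    }
}

/// Server side: verifying costs one hash, no matter how long solving took.
fn verify(seed: &str, difficulty: u32, nonce: u64) -> bool {
    let hash = Sha256::digest(format!("{seed}{nonce}").as_bytes());
    leading_zero_bits(&hash) >= difficulty
}

fn main() {
    let seed = "per-visitor-random-seed"; // illustrative
    let difficulty = 16; // ~65k hashes on average at this setting
    let nonce = solve(seed, difficulty);
    assert!(verify(seed, difficulty, nonce));
    println!("solved with nonce {nonce}");
}
```

The asymmetry is the point: a regular visitor pays the solving cost roughly once per session, while a crawler that refuses to keep cookies pays it again on every page.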
None of those things work well is the problem. It doesn't stop the bots from hammering your site. Crawlers will just timeout and move on.
I run a service that gets attacked by AI bots, and while PoW isn't the only way to do things, none of your suggestions work at all.
I think Anubis is born out of desperation
either you have the service with anubis or you have no service at all
unlike pyramid coins, anubis serves a purpose
It still uses Proof-of-Work, in which the coal being burned is only to prove that you burned the coal.
Everything uses energy
Do you have any measurements on power usage? It seems very minor.
Everything a computer does uses power. The issue is the same very valid criticism of (most) crypto currencies: the design objectives are only to use power. That's the very definition of "proof of work." You usually don't care what the work is, only that it was done. An appropriate metaphor is: for "reasons", I want to know that you moved a pile of rocks from one place to another, and back again. I have some way of proving this - a video camera watching you, a proof of a factorization that I can easily verify, something - and in return, I give you something: monopoly money, or access to a web site. But moving the rocks is literally just a way I can be certain that you've burned a number of calories.
I don't even care if you go get a ~~GPU~~ tractor and move the rocks with that. You've still burned the calories, by burning oil. The rocks being moved has no value, except that I've rewarded you for burning the calories.
That's proof of work. Whether the reward is fake internet points, some invented digital currency, or access to web content, you're still being rewarded for making your CPU burn calories to calculate a result that has no intrinsic informational value in itself.
The cost is at scale. For a single person, say it's a fraction of a watt. Negligible. But for scrapers, all of those fractions add up to real electricity bill impacts. However - and this is the crux - it's always at scale, even without scrapers, because every visitor is contributing to the total, global PoW cost of that one website's use of this software. The cost isn't noticeable to individuals, but it is being incurred; it's unavoidable, by design.
If there's no cost in the aggregate of 10,000 individual browsers performing this PoW, then it's not going to cost scrapers, either. The cost has to be significant enough to deter bots; and if it's enough to be too expensive for bots, it's equally significant for the global aggregate; it's just spread out across a lot of people.
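To put rough, purely assumed numbers on that symmetry: say one challenge takes about a second of CPU at roughly 15 W, i.e. ~15 J per solve. Then 10,000 legitimate visitors collectively burn ~150 kJ (about 0.04 kWh), and a scraper forced to re-solve on 10,000 fetches burns the same ~150 kJ. Turning the difficulty up scales both sides together; the spend differs only in whether it lands on one scraper's electricity bill or is spread across everyone else's.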
But the electricity is still being used, and heat is still being generated, and it's yet another straw on the environmental camel's back.
It's intentionally wasteful, and as such, it's a terrible design.
It doesn't need to be anywhere near as resource intensive as a crypto currency since it isn't used for security. The goal is not to stop bots altogether. The goal is to slow down the crawlers enough so that the server hosting the service doesn't get pegged. The bots went from being respectful of server operators to hitting pages millions of times a second. This is made much worse by the fact that git hosting services like Forgejo have many links, many of which trigger the server to do computations. The idea behind Anubis is that a user really only has to do the PoW once since they aren't browsing to millions of pages. A crawler, on the other hand, will have to do tons of proofs of work, which bogs down the crawling rate. PoW also has the advantage of requiring the server to hold minimal state. If you try to enforce a time delay, that means the server has to track all of that.
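On that minimal-state point, here's a rough sketch of how a server can hand out and check challenges without remembering anything per client: it signs the challenge parameters and an expiry with an HMAC, and later verifies whatever the client hands back from the token alone. This is a generic pattern (using the `hmac`, `sha2`, and `hex` crates), not Anubis's actual token format; the field layout and names are made up:

```rust
// Stateless challenge tokens: the server signs "seed|difficulty|expiry" with an
// HMAC instead of storing which challenges it issued. Generic sketch, not
// Anubis's real wire format.
use hmac::{Hmac, Mac};
use sha2::Sha256;

type HmacSha256 = Hmac<Sha256>;

/// Issue a challenge token; nothing is stored server-side.
fn issue_challenge(key: &[u8], seed: &str, difficulty: u32, expiry_unix: u64) -> String {
    let payload = format!("{seed}|{difficulty}|{expiry_unix}");
    let mut mac = HmacSha256::new_from_slice(key).expect("HMAC accepts any key length");
    mac.update(payload.as_bytes());
    let tag = hex::encode(mac.finalize().into_bytes());
    format!("{payload}|{tag}")
}

/// Check a returned token with no lookup: recompute the MAC and check the expiry.
fn token_is_genuine(key: &[u8], token: &str, now_unix: u64) -> bool {
    let Some((payload, tag_hex)) = token.rsplit_once('|') else {
        return false;
    };
    let Some(expiry) = payload.rsplit('|').next().and_then(|s| s.parse::<u64>().ok()) else {
        return false;
    };
    if now_unix > expiry {
        return false;
    }
    let mut mac = HmacSha256::new_from_slice(key).expect("HMAC accepts any key length");
    mac.update(payload.as_bytes());
    mac.verify_slice(&hex::decode(tag_hex).unwrap_or_default()).is_ok()
}
```

Compare that with rate limiting by delay, where the server has to keep a timer or counter for every client it is throttling.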
It is also important to realize that Anubis is an act of desperation. Many projects did not want to implement it, but they had no choice since their servers were getting wrecked by bots. The only other option would be Cloudflare, which is much worse.
Just know that you are 100% wrong on this. You don't understand what Anubis is doing, you don't understand the problem it's solving, and you need to educate yourself before having strong opinions about things
From the Anubis project:
The idea is that genuine people sending emails will have to do a small math problem that is expensive to compute,
"Expensive" in computing means "energy intensive," but if you still challenge that, the same document later says
This is also how Bitcoin's consensus algorithm works.
Which is exactly what I said in my first comment.
The design document states
Anubis uses a proof-of-work challenge to ensure that clients are using a modern browser and are able to calculate SHA-256 checksums.
This is the energy-wasting part of the algorithm. Furthermore,
the server can independently prove the token is valid.
The only purpose of the expensive calculation is so the server can verify that the client burned energy. The work done is useless outside of proving the client performed a certain amount of energy-consuming work. In particular, there are other, more efficient ways of generating verifiable hashes; they are not used because the whole point is to make the client incur a cost, in the form of electricity use, to generate the token.
At this point I can't tell if you honestly don't understand how proof of work functions, are defensive of the project because you have some special interest, or are just trolling.
Regardless, anyone considering using Anubis should be aware that the project has the same PoW design as Bitcoin, and if you believe cryptocurrencies are bad for the environment, then you want to stay away from Anubis and sites that use it.
Also note that the project is a revenue generator for the authors (check the bottom of the github page), so you might see some astroturfing.
Stfu about crypto, that's orthogonal to the point of this project
The point is to make it too expensive for them, so they leave you alone (or, ideally, totally die but that's a long way off). They're making a choice to harvest data on your site. Make them choose not to. It saves energy in the long run.
They’re making way more money off the data they get from the website than they waste on the POW.
If you really wanted efficiency then make a plain text version of the web page that doesn't require them to do expensive JavaScript and other Ajax BS. Or shit, give them a legitimate sitemap too.
Yet there are countless examples of webmasters alleviating traffic that is crushing their sites by deploying this solution. The reasoning is up in the air, but the effectiveness is there.
It actually doesn't do that for all clients, according to the docs
It'll let you straight through if your user agent doesn't contain "Mozilla"
Whaaaat? Why only look for Mozilla?
All normal web browsers have Mozilla in the user agent, so it's kinda weird to only do it for that. Chrome, Safari, and FF all start with Mozilla/5.0
Because it's super common in web scrapers
I did but it told me I'm a bot :(
Edit: Yay, it worked.
Doesn't KDE/Plasma (or Qt) have this for years?
Yes, and a few KDE apps work great on Android.
But more FOSS is more better, so GTK on Android is great news for both Android users and GTK developers
What is GTK? Grand Theft Kraftdinner?
What is this
Quite a substantial step towards being able to use Linux apps on Android phones.
Oh, are we getting Year of android desktop?!
Ohhhh I see
I tried using GTK with C, JavaScript and Rust and the experience was always terrible. The tools, the documentation... C is just sooooo old and GTK doesn't translate well to Rust. For me GTK is great for Window Manager level tools that need to be small, super fast and are fairly static (you don't add new features to a settings app or clock widget that often). I definitely wouldn't do cross platform apps in it.
First, what do you mean by "C is just so old"? That seems like a language problem, not a GTK problem. Tbh, when it comes to documentation, you're likely better off with C as the official GTK docs target the C API (https://docs.gtk.org/gtk4/).
Also, what do you mean by "it doesn't translate well to Rust"? Because Rust, like other supported languages such as Python, has bindings that are fairly well documented. I haven't used the Rust binding but I've used the Python binding extensively and there are references to all the APIs (https://lazka.github.io/pgi-docs/)—same with Rust (https://gtk-rs.org/gtk4-rs/).
Lastly, I can understand not using GTK for cross-platform apps, but not for the reasons you mentioned. While GTK's primary target is Linux, you can technically still make it cross-platform.
By "C is so old" I mean it lacks a lot of features modern languages have. Proper linting, code formatting, dependency management, version management, virtual environments, modules. Yes, you can solve some of it with docker but it's terrible compared with Rust for example.
By "it doesn't translate well to Rust" I mean that GObject doesn't translate well to Rust structs so you end up with weird structures split into multiple modules and terrible code overhead. Compared with modern UI frameworks it's just not ergonomic to work with.
Yes, I know GTK supports multiple platforms, but if I want to develop for desktop and mobile I had a way better experience using Tauri+Leptos. It's not just about having some bindings and some docs for it. It's about how much effort it takes to set it up and figure out how to implement specific functionality. Good docs, good tools, a good compiler, and readable code for the framework help a lot.
Your statement about C is still mostly wrong. First, linting isn't typically a built-in feature for many languages; you mostly depend on external tools or IDEs (for C/C++, CLion and VSCode with specific extensions solve this). A similar occurrence is seen in formatting, where, except for a few languages like Rust and Go (with officially maintained formatters), you still have to depend on external tools or IDEs. For dependency management, it is well-known that C/C++ lacks an official package manager, but there are well-tested third-party package managers such as conan (https://conan.io/) and vcpkg (https://vcpkg.io/). Another benefit is the project-local support in both package managers (although it is more robust in Conan), which effectively addresses both the version management and virtual environment issues you raised. You don't always need virtual environments anyway (Rust doesn't use one either).
I haven't used the Rust binding, so I don't have direct experience with this and may not fully understand the pain points. However, a glance at the docs shows the Rust binding and trait-based pattern still does the job effectively. I don't understand what you mean by "weird structures split into multiple modules", as you're just reusing built-in structs like you would use a class in the Python binding, for instance. So I don't see the problem.
Well, mobile support for GTK is currently experimental, so there's that.
Of course linting and formatting are not part of the language. Of course you can install extensions in some IDE that will handle it. Conan looks great but I never saw a project using it, and when I was asking C devs about dependency management no one mentioned it. I checked dozens of GTK projects looking for some decent template to copy and didn't find anything remotely "modern". All projects I see simply use meson/ninja, install deps at the system level, and don't provide any code formatting or linting guidelines. Most don't bother with any modules and just dump all source code into 100 files in src. And I'm talking about actively developed tools for Gnome, not some long-forgotten ones. For me the big difference between languages like C and Rust is that every Rust project uses the same formatting and linting tools, uses modules and proper dependency management, while most C projects don't. Because it's old. Because a lot of C devs learned programming when it wasn't a thing. Because a lot of C projects started when those tools didn't exist. You can probably start a new C project in a "modern" way, but when I was trying to do it there were no examples, no documentation, and when I asked C devs I was told that "you just do it like always". In modern languages the default way is the "modern" way.
This is how you declare a new component in gtk-rs:
// In the widget's public module:
glib::wrapper! {
    pub struct MainMenu(ObjectSubclass<imp::MainMenu>)
        @extends gtk::PopoverMenu, gtk::Popover, gtk::Window, gtk::Widget,
        @implements gtk::Accessible, gtk::Buildable, gtk::ConstraintTarget, gtk::Native, gtk::ShortcutManager;
}

impl MainMenu {
    pub fn new() -> Self {
        Object::new(&[]).expect("Failed to create `MainMenu`.")
    }
}

// ...and in the separate private `imp` module:
#[glib::object_subclass]
impl ObjectSubclass for MainMenu {
    const NAME: &'static str = "MainMenu";
    type Type = super::MainMenu;
    type ParentType = gtk::PopoverMenu;

    fn class_init(klass: &mut Self::Class) {
        klass.bind_template();
    }

    fn instance_init(obj: &glib::subclass::InitializingObject<Self>) {
        obj.init_template();
    }
}
This is how you declare a new component in Leptos:
#[component]
fn App() -> impl IntoView {
    view! {
        <div>test</div>
    }
}
That's what I mean by "it's not ergonomic".
Well, for a modern approach to development in C, you may have to be creative and not rely on ready-made examples, but it's still doable. A lot of the C issues are at the "conventional" level and can be solved if you just do things a little bit differently (e.g. nothing stops you from modularising source/header files even though C doesn't enforce this at the language level).
I can understand the "ergonomics" you speak of in Rust, but it's not very surprising in that respect, especially given that C faces the same challenge (and is even more verbose). The GObject system seems to map well onto languages that favour the OOP style (built-in classes, inheritance, etc.) like Python. So yeah, on that, I understand ;)
I definitely recommend using Vala for Gtk as it was tailor-made for it. It's built on top of the object system that Gtk uses, so the API fits into the language flawlessly, unlike Rust. It even has its own website for browsing the Gnome APIs: https://valadoc.org/