Compile with AI (beehaw.org)

Hello everyone,

I apologize if this debate has already taken place here. If so, please delete the post and kindly point me to where I should send my message instead.

We all know that in technology there are always points where you have to place trust in the developer(s), whether for hardware or software. Some of that is unavoidable in the short term, so that's not where I'm focusing my point.

But something bothers me about "open-source" applications. I don't know how to compile software, and I'm not willing to dedicate many hours of my life to learning it. So, in addition to trusting reputable companies, I now choose to trust a reputable person or group whose open-source code has probably been audited. However, those audits cover the published source code, not whatever ends up being compiled into the binary I actually run. In the end, every project is a bucket of trust unless I can compile it myself, and even then something might slip past us, but I understand it would at least reduce the risk.

I read that F-Droid used to do this: they didn't trust the app developer's binaries, but compiled their own builds from the open-source code. That seemed fantastic to me, but the problem was always the delay.

The question is: couldn't an AI-powered program be created to compile any GitHub repository directly? It would eliminate the need to trust even the developer themselves; we would only have to trust their code, as we already do today, and audits would finally have real value. We would know that what we receive is built from that code.
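To make the idea concrete, here is a minimal sketch of the "build it yourself and compare" check I have in mind. The repository URL, build command, and file paths are hypothetical placeholders, and for the hashes to ever match the project would need a fully reproducible build (pinned toolchain, fixed flags, no embedded timestamps); this only illustrates the comparison step:

```python
"""Minimal sketch: rebuild from source and compare against the distributed binary.

Assumptions (not from any real project): the repo URL, build command, and
output paths below are placeholders, and the project builds reproducibly.
"""
import hashlib
import subprocess
from pathlib import Path

REPO_URL = "https://github.com/example/app.git"  # hypothetical repository
BUILD_CMD = ["make", "release"]                   # hypothetical build command
LOCAL_BINARY = Path("app/build/app")              # hypothetical build output
OFFICIAL_BINARY = Path("downloads/app")           # the binary you were given


def sha256(path: Path) -> str:
    """Hash a file in chunks so large binaries don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def main() -> None:
    # Fetch the exact source the audit was performed on.
    subprocess.run(["git", "clone", "--depth", "1", REPO_URL, "app"], check=True)
    # Build it locally with the project's documented build command.
    subprocess.run(BUILD_CMD, cwd="app", check=True)
    # If the build is reproducible, matching digests show the distributed
    # binary corresponds to the audited source.
    ours, theirs = sha256(LOCAL_BINARY), sha256(OFFICIAL_BINARY)
    print("match" if ours == theirs else f"MISMATCH:\n  local    {ours}\n  official {theirs}")


if __name__ == "__main__":
    main()
```

If the digests match, the binary you received provably corresponds to the audited source; the hard part in practice is making the build reproducible, not automating the comparison.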

I would also love for the Flatpak concept to work like this: the developer wouldn't sign the binary, only the code, and Flathub (or some other backend) would build everything automatically. And if there were doubts about Flathub or any other backend, users could do the build locally. It would be a bit more tedious, but the value for privacy would be enormous.
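As a rough sketch of the local-verification half of that idea, something like the following could rebuild an app from its manifest with today's flatpak-builder and fingerprint the result. The manifest name is a placeholder, and actually comparing the fingerprint against Flathub's published build would also require the build to be bit-for-bit reproducible, which isn't guaranteed for every app:

```python
"""Sketch: rebuild a Flatpak locally from its manifest and fingerprint the result.

Assumptions: flatpak-builder is installed and the app's manifest is present;
the manifest filename below is a placeholder.
"""
import hashlib
import subprocess
from pathlib import Path

MANIFEST = "org.example.App.json"  # hypothetical manifest from the app's repo
BUILD_DIR = "build-dir"


def main() -> None:
    # Standard flatpak-builder invocation: build the app described by the
    # manifest into a local directory.
    subprocess.run(["flatpak-builder", "--force-clean", BUILD_DIR, MANIFEST], check=True)

    # Fingerprint every file the local build produced, so the result can be
    # compared against another independently produced build.
    digest = hashlib.sha256()
    for path in sorted(Path(BUILD_DIR, "files").rglob("*")):
        if path.is_file():
            digest.update(path.relative_to(BUILD_DIR).as_posix().encode())
            digest.update(path.read_bytes())
    print("local build fingerprint:", digest.hexdigest())


if __name__ == "__main__":
    main()
```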

By the way, if any of this already works this way and I am confused, please enlighten me.

Thank you very much, everyone!

duncesplayed@lemmy.one 4 points 1 year ago

I agree with the general worry. This is part of why maintainers matter. Communities like Debian have built up a lot of trust that they are packaging software correctly, and their efforts matter a lot. You should reject any sort of container or app "store" that isn't built upon trustworthy maintainers.

An AI probably could do it...unreliably. The problem with most modern AI approaches is that they are fundamentally unreliable. People are familiar with ChatGPT these days and its "hallucinations", where it invents things that aren't true out of thin air. That's fundamental to large neural networks and not easily fixed. So I wouldn't take that as a good way forward if the whole point is about trust.

But some old-school AI techniques (expert systems) might do well here.
