195
submitted 18 hours ago by JOMusic@lemmy.ml to c/technology@lemmy.world

Article: https://proton.me/blog/deepseek

Calls it "Deepsneak", failing to make it clear that the reason people love Deepseek is that you can download it and run it securely on any of your own private devices or servers - unlike most of the competing SOTA AIs.

I can't speak for Proton, but the last couple of weeks are showing some very clear biases coming out.

top 50 comments
[-] Ulrich@feddit.org 7 points 3 hours ago

failing to make it clear that the reason people love Deepseek is that you can download it and run it securely on any of your own private devices or servers

That's not why. Almost no one is going to do that. That's why they didn't mention it.

[-] lemmus@szmer.info 9 points 5 hours ago

They are absolutely right! Most people don't give a fuck about hosting their own AI, they just download "Deepsneak" and chat... and it is unfortunately even worse than "ClosedAI", because they are based in China. That's why I hope DuckDuckGo will host Deepseek on their servers (as it is very lightweight in resources, yes?), then we will all benefit from it.

[-] febra@lemmy.world 6 points 5 hours ago

Tutamail is a great email provider that takes security very seriously. Switched a few days ago and I'm very happy.

[-] asudox@lemmy.asudox.dev 1 points 36 minutes ago

Yet not great from a privacy perspective. They don't even allow third party email apps.

[-] crewman_princess@lemmy.sdf.org 16 points 8 hours ago

Surely Proton's own AI is without any of these problems... https://proton.me/blog/proton-scribe-writing-assistant

[-] Evotech@lemmy.world 12 points 8 hours ago

You could write this exact article about openai too

[-] cley_faye@lemmy.world 9 points 7 hours ago

The thing is, some people like Proton. Or liked, if this keeps going. When you build a business on trust and you start flailing like a headless chicken, people get wary.

[-] Evotech@lemmy.world 5 points 7 hours ago

A blog post telling people to be wary of a Chinese app running an LLM people know very little about is flailing?

[-] Kbobabob@lemmy.world 4 points 6 hours ago* (last edited 6 hours ago)

Can't it be run standalone without network?

They also published the weights so we know more about it than some of the others

[-] Evotech@lemmy.world 3 points 5 hours ago

This focuses mostly on the app though, which is #1 on the app stores atm

We know it's censored to comply with Chinese authorities, just not how much. It's probably trained on some fairly heavy propaganda.

[-] heavydust@sh.itjust.works 2 points 4 hours ago

When the CEO praises Trump and says "China bad because China" while hiding that Western AIs have the same kind of censorship, that's hypocrisy.

[-] cy_narrator@discuss.tchncs.de 7 points 8 hours ago

Why do they even have to give their goddamn opinion? Who asked? Why should they care?

[-] cupcakezealot 99 points 16 hours ago

I hate AI but on the other hand I love how Deepseek is causing AI companies to lose billions.

[-] Rogue@feddit.uk 50 points 14 hours ago

The desperate PR campaign against deepseek is also very entertaining.

[-] heavydust@sh.itjust.works 3 points 4 hours ago

Billionaires are really pissed about it, I’m happy.

[-] sugar_in_your_tea@sh.itjust.works 3 points 6 hours ago* (last edited 6 hours ago)

We're playing with it at work and I honestly don't understand the hype. It's super verbose and would take longer for me to read the output than do the research myself. And it's still often wrong.

It's cool I guess, and I'm still looking for a good use case, but it's still a ways from taking over the world.

[-] Rogue@feddit.uk 5 points 5 hours ago

The same is also true of ChatGPT. On the surface the results are incredibly believable but when you dig into it or try to use some of the generated code it's nonsense.

[-] sugar_in_your_tea@sh.itjust.works 2 points 5 hours ago

I certainly think it's cool, but the further you stray from the beaten path, the more janky it gets. I'm sure there's a good workflow here, it'll just take some time to find it.

[-] firadin@lemmy.world 49 points 17 hours ago

Unsurprising that a right-wing Trump supporting company is now attacking a tech that poses an existential threat to the fascist-leaning tech companies that are all in on AI.

[-] Rogue@feddit.uk 12 points 14 hours ago* (last edited 14 hours ago)

For clarity, the company did not explicitly support Trump. They simply stated negative things about the "corporate Dems" and praised the new Republican party.

[-] firadin@lemmy.world 35 points 14 hours ago

Ah my mistake, they didn't praise the fascist - just the fascist party. Big difference.

[-] tonytins@pawb.social 59 points 18 hours ago

DeepSeek is open source, but is it safe?

These guys are in the open-source business themselves; they should know the answer to this question.

[-] AstralPath@lemmy.ca 27 points 17 hours ago

Has anyone actually analyzed the source code thoroughly yet? I've seen a ton of reporting on its open source nature but nothing about the detailed nature of the source.

FOSS = safe only if the code has been audited in depth.

[-] activ8r@sh.itjust.works 1 points 6 hours ago

A few of my friends who are a lot more knowledgeable about LLMs than myself are having a good look over the next week or so. It'll take some time, but I'm sure they will post their results when they are done (pretty busy times unfortunately).

I'll do my best to remember to come back here with a link or something when I have more info 😊

That said, hopefully someone else is also taking a look and we can get a few different perspectives.

[-] Fubarberry@sopuli.xyz 35 points 17 hours ago

I haven't looked into Deepseek specifically so I could be mistaken, but a lot of times when a model is called "open-source" it really is just open weights. You can download it or train other models off of it, but you can't actually view any kind of source code on how the model works.

An audit isn't really possible.

[-] L_Acacia@lemmy.ml 7 points 7 hours ago

It is open-weight; we don't have access to the training code or the dataset.

That being said, it should be safe for your computer to run Deepseek's models, since the weights are distributed as .safetensors, which should block any code execution from code injected into the model weights.
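For anyone curious why that matters, here's a minimal sketch (the filename is just a placeholder, not a real Deepseek shard): a .safetensors file is only a JSON header plus raw tensor data, so loading it never deserializes arbitrary Python objects the way pickle-based .bin checkpoints can.

```python
# Minimal sketch, assuming you've already downloaded a .safetensors shard
# locally. The filename below is a placeholder, not a real Deepseek file.
# safetensors stores only tensors plus a JSON header, so unlike pickle-based
# checkpoints there is no arbitrary-code-execution path when loading.
from safetensors.torch import load_file  # pip install safetensors torch

weights = load_file("model-00001-of-00002.safetensors")  # hypothetical shard

# Inspect a few entries to confirm it's just raw weights, nothing executable.
for name, tensor in list(weights.items())[:5]:
    print(name, tuple(tensor.shape), tensor.dtype)
```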

[-] the_swagmaster@lemmy.zip 29 points 16 hours ago

I don't think they are that biased. They say in the article that AI models from all the leading companies are not private and shouldn't be trusted with your data. The article focuses on Deepseek given that it's the new big thing. Of course, since it's controlled by China, data privacy is even less of a thing that can be trusted.

Should we trust Deepseek? No. Should we trust OpenAI? No. Should we trust anything that is not developed by an open community? No.

I don't think Proton is biased; they are explaining the risks with Deepseek specifically and mention how other AIs aren't much better. The article is not titled "Deepseek vs OpenAI" or anything like that. I don't get why people bag on Proton when they are the biggest privacy-focused player that could (almost) replace Google for most people!

[-] sugar_in_your_tea@sh.itjust.works 2 points 6 hours ago

Exactly.

Also, none of the article applies if you run the model yourself, since the main risk is whatever the host does with your data. The model itself has no logic.

I would never use a hosted AI service, but I would probably use a self-hosted one. We are trying a few models out at work and we're hosting them ourselves.
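For anyone wondering what the self-hosted setup looks like in practice, here's a rough sketch (the host, port, and model name are assumptions for a typical vLLM/Ollama-style deployment, not anything from the article): you point a standard client at your own server, so no prompts ever leave your infrastructure.

```python
# Minimal sketch, assuming a self-hosted inference server that exposes an
# OpenAI-compatible API (e.g. vLLM or Ollama). The URL and model name below
# are placeholders for whatever you deploy internally.
from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url="http://localhost:8000/v1",  # your own server, not a third party
    api_key="not-needed-for-local",       # local servers usually ignore this
)

resp = client.chat.completions.create(
    model="deepseek-r1-distill-qwen-14b",  # hypothetical local model name
    messages=[{"role": "user", "content": "Summarize our internal design doc."}],
)
print(resp.choices[0].message.content)
```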

[-] MushuChupacabra@lemmy.world 32 points 17 hours ago

Proton working overtime to discourage me from renewing.

[-] MolecularCactus1324@lemmy.world 9 points 15 hours ago

I don’t see how what they wrote is controversial, unless you’re a tankie.

[-] JOMusic@lemmy.ml 4 points 8 hours ago

Given that you can download Deepseek, customize it, and run it offline in your own secure environment, it is actually almost irrelevant how people feel about China. None of that data goes back to them.

That's why I find all the "it comes from China, therefore it is a trap" rhetoric to be so annoying, and frankly dangerous for international relations.

Compare this to OpenAI, where your only option is to use the US-hosted version, where it is under the jurisdiction of a president who has no care for privacy protection.

[-] KingRandomGuy@lemmy.world 3 points 6 hours ago

TBF you almost certainly can't run R1 itself. The model is way too big and compute-intensive for a typical system. You can only run the distilled versions, which are definitely a bit worse in performance.

Lots of people (if not most people) are using the service hosted by Deepseek themselves, as evidenced by the ranking of Deepseek on both the iOS app store and the Google Play store.

[-] rumba@lemmy.zip 6 points 13 hours ago

Yeah, the article makes mostly legit points: if you're contacting the chatbot in China, it is harvesting your data. Just like if you contact OpenAI or Copilot or Claude or Gemini, they're all collecting all of your data.

I do find it somewhat strange that they only talk about Deepseek's hosted models.

It's absolutely trivial to just download the models and run them locally yourself, and then you're not giving any data back to them. I would think that Proton would be all over that for a privacy scenario.

[-] KingRandomGuy@lemmy.world 1 points 6 hours ago

It might be trivial to a tech-savvy audience, but considering how popular ChatGPT itself is and considering DeepSeek's ranking on the Play and iOS App Stores, I'd honestly guess most people are using DeepSeek's servers. Plus, you'd be surprised how many people naturally trust the service more after hearing that the company open sourced the models. Accordingly I don't think it's unreasonable for Proton to focus on the service rather than the local models here.

I'd also note that people who want the highest quality responses aren't using a local model, as anything you can run locally is a distilled version that is significantly smaller (at a small but non-trivial overall performance cost).

[-] rumba@lemmy.zip 1 points 1 hour ago

You should try the comparison between the larger models and the distilled models yourself before you make judgment. I suspect you're going to be surprised by the output.

All of the models are basically generating possible outcomes based on noise, so if you ask the same model the same question five different times in five different sessions, you're going to get five different variations on an answer.

You will find that an x-out-of-five score between models is not that significantly different.

For certain cases larger models are advantageous: if you need a model to return a substantial amount of content, or you're asking it to write you a chapter of a story, larger models will definitely give you better output and better variation.

But if you're asking it to help you with a piece of code or explain some historical event, the average 14B model that will fit on any computer with a video card will give you a perfectly serviceable answer.
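If you want to try that comparison yourself, something like this is all it takes (the repo id is an assumption for one of the published distills; pick whichever size fits your GPU):

```python
# Minimal sketch of fully local inference with a distilled checkpoint.
# The repo id below is assumed (one of the published R1 distills); swap in
# whatever size fits your VRAM. No prompt data leaves your machine.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-14B"  # assumed repo name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain the difference between open source and open weights."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```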

[-] Bogasse@lemmy.ml 21 points 18 hours ago* (last edited 18 hours ago)

It would be fair if ChatGPT or any American service received the same treatment, but the only article I found, from 2023, seems quite neutral :/

https://proton.me/blog/privacy-and-chatgpt
