Sure, it made the training process cheaper, but it still takes a fraction of the energy to generate a single output compared to other LLMs like ChatGPT or Llama. Plus, it's open source. You can't discredit a technological advancement for building on previous advancements, especially when it's done transparently.
Unfortunately, that's just a danger of the internet. Careless users are gonna get scammed whether it's a stock-trading AI that empties your bank account when you link it, or a Nigerian prince who just needs $5,000 so he can unlock his fortune and repay you $100,000.
Even then, what national-security-upending information does the average citizen have stored on their phone that they're just whimsically uploading anywhere that'll take a PDF? Like I said, I understand restrictions on devices used by government officials for official purposes, but banning it unilaterally for civilian use as well seems excessive.
Then you should've specified that those were the parameters you wanted. Answers and thought processes will vary based on the prompt provided.
My point is that you can still use creative prompting to get answers that should be blocked by its safety constraints. My point isn't that there are no guidelines to work around.
I'm not an AI researcher, nor do I work with AI professionally, so I'm not familiar with 100% of the background processes involved in these LLMs. But if the question is "can you get Deepseek to talk about Tiananmen Square," then the answer is yes.
And why should I be more worried about a hypothetical psyop that I might experience than the current psyops that I am experiencing?