ChatGPT offered bomb recipes and hacking tips during safety tests
(www.theguardian.com)
Interesting (not familiar with TATP)
Thinking of two goals:
Decline to assist the stupidest people when they make simple dangerous requests
Avoid assisting the most dangerous people when they seek guidance on complex processes
Maybe this time it was OK that it helped with something simple after being fed smart instructions, though I understand that may not bode well for the second goal.
LLMs are not capable of the kind of thinking you are describing.