
OpenAI blog post: https://openai.com/research/building-an-early-warning-system-for-llm-aided-biological-threat-creation

Orange discuss: https://news.ycombinator.com/item?id=39207291

I don't have any particular section to call out. May post thoughts ~~tomorrow~~ today it's after midnight oh gosh, but wanted to post since I knew y'all'd be interested in this.

Terrorists could use autocorrect according to OpenAI! Discuss!

[-] Umbrias@beehaw.org 2 points 9 months ago* (last edited 9 months ago)

I'll have to check it out.

The general point seems to be yours: that intellectual availability is the largest restriction on bioterrorism. I don't disagree, but a big part of my argument is that access to this information has never been higher (which is better than not, for a variety of reasons), and access to usable resources has never been higher either. We have plenty of garage-scale bio labs as it is. So yes, the biggest limit is the availability of people with the knowledge to do it, but that's not a hard roadblock, at least not anymore.

And the prediction horizon on biotech is tiny. Give it another ten years? Twenty? The threat isn't zero just because nobody has done it yet.

[-] saucerwizard@awful.systems 5 points 9 months ago

Not just intellectual availability, but the complexity of the job itself. iirc it goes into the Russian experience.

this post was submitted on 01 Feb 2024
18 points (100.0% liked)

SneerClub
