
OpenAI blog post: https://openai.com/research/building-an-early-warning-system-for-llm-aided-biological-threat-creation

Orange discuss: https://news.ycombinator.com/item?id=39207291

I don't have any particular section to call out. May post thoughts ~~tomorrow~~ today it's after midnight oh gosh, but wanted to post since I knew y'all'd be interested in this.

Terrorists could use autocorrect according to OpenAI! Discuss!

skillissuer@discuss.tchncs.de 3 points 10 months ago (last edited 10 months ago)

right, sure. there are very few labs that require select agents, and running an anthrax vaccine program out of your backyard is a hobby i haven't heard of yet

lab-scale cws (chemical weapons) are just that: lab scale. multi-kg amounts are not lab scale, and unless you're running around with a suspiciously modified umbrella, sub-gram amounts aren't casualty-producing

> The scale of a chemical weapons program is necessarily higher from a hazard point of view due to the sheer volume of material

you can't grow some random bacteria in high concentration; you're looking at tens to thousands of liters of fermenter volume just to get anything useful (then you need to purify it, dispose of the now pretty hazardous waste, and lyophilize all the output, which gets expensive too)

> The point wasn’t to show off a magnificent knowledge of lab equipment, but to demonstrate the similarity at a high level.

for the reasons i've pointed out before, there's little to no similarity

> Ehhhh there are plenty of research applications of llms at the moment, and at least one is in use for generating candidate synthetic compounds to test. It’s not exactly the most painful thing to set up either, but no, if you were to try to make a bioweapon today with llm tools (and other machine learning tools) alone it would go poorly.

it's nice that you mention it, because i've witnessed some "ai-driven" drug development firsthand during early covid. despite having access to xrd data from fragment screening and antiviral activity measurements, and despite a custom ai built just for this one protein, the actual lead that survived development to the clinical stage was made completely and entirely by human medchemists, atom by atom. it didn't even involve one pocket that was important in binding of that compound (though involving that pocket was a good idea in principle, because there are potent compounds that do that), and that's despite ai-generated compounds amounting to something like 2/3 of everything tested for potency. but you won't find any of that on that startup's page anymore, oh no, this scares away vcs.

> Ten, twenty years from now I’m not so sure, the prediction horizon is tiny.

i'm equally sure that it'll go poorly then too, because this is not a problem you can simulate your way out of; some real-world data would need to be fed in, and that data is restricted

> But folks here also seem to think any of this is magically impossible, and not something that dedicated people can reasonably do with fewer resources by the day

yeah nah again. recently (june 2023) some fucker in norway got caught making ricin (which i would argue is more of a chemical weapon), because he poisoned himself in the process, with zero fatalities. [1] around the same time, a single terrorist incident generated about the same number of casualties and far more fatalities than all of these "bw terrorism" incidents combined. [2] this doesn't make me think that bw are a credible threat, at least compared to conventional weapons, outside of nation-state-level actors

at no point have you answered the problem of analysis. this is what generates most of the costs in a lab, and i see no way an llm can tell you how pure a compound is, what it is, or what kind of bacteria you've just grown and whether it's lethal and how transmissible. if you have a known-lethal sample (load-bearing assumption) you can just grow that, and at no point will gpt4 help you; if you don't, you need to test it anyway, and good luck doing that covertly if you're not a state-level actor. you run into the same problem with cws, but at least there you can compare some spectra with known literature ones. at no point have you shown how llms can expedite any of this

you don't come here with convincing arguments, you don't have any reasonable data backing them, and i hope you'll find something more productive to do with the rest of the weekend. i remain unconvinced that bw and even cw terrorism is anything other than a movie plot idea, and that its promise is a massive bait for a particular sector of extremists

this post was submitted on 01 Feb 2024

SneerClub
