Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Anthropic’s latest model, which they haven’t released to the public yet since they’re worried it’s gonna fuck up cybersecurity. This thread goes over it a bit.
XCancel link for those of us sick of being badgered to sign up/in
On a more productive note, this feels likely to be tied in with the usual issues of AI sycophancy re: false positive rate. If you ask the model to tell you about security vulnerabilities, it's never going to tell you there aren't any, any more than existing scanners will. When I worked for F5 it was not uncommon to have to go down a list of vulnerabilities that someone's scanner turned up and figure out whether each was something that needed a mitigation we could apply on our box, something that needed to be configured somewhere else in the network (usually on their actual servers), or (most commonly) a false positive, e.g. "your software version would be vulnerable here, which is why it flagged, but you don't have the relevant module activated, and if an attacker is able to modify your system to enable it you're already compromised to a far greater degree than this would allow."

That was with existing tools that weren't trying to match a pattern and complete a prompt. Given that we've seen the shitshow that is Claude Code, I think it's pretty clear they're getting high on their own supply, and this announcement ought to be catnip for black hats.
Wow, sounds like they just automated "shitty infosec teams that only forward scanner output without evaluating it" out of a job. Holy shit they were right that AI was coming for jobs!
True. I will say that the shitty infosec teams are probably being hit less hard than the SMEs they offloaded their jobs onto, because from their perspective it doesn't actually matter whether it's an F5 support engineer or a chatbot that tells them the answer; either way they've successfully offloaded the task of validating security onto another entity that can make up for their shortcomings with a combination of accuracy and authority. Nobody is going to get fired for not fixing a bug that the vendor SME told them wasn't actually an issue for them, effectively. And when the org has been pushing AI as hard as so many of them have, it's pretty easy to throw the chatbot under the same bus and expect the bus to stop instead.
I suspect this is the real limit. Claude Mythos might find real vulnerabilities, but if they are buried among loads of false positives it won't be that useful to black-hat or white-hat hackers, and the endless tide of slop PRs and bug reports will keep coming.
I tried looking through Anthropic's "preview" for a description of the false positive rate... they sort of beat around the bush as to how many false positives they had to sort through to find the real vulnerabilities they reported (even obliquely addressing the issue was better than I expected, but still well short of the standard for a good industry-standard security report, from what I understand).
They've got one class of bugs they can apparently verify efficiently?
It's not clear from their preview whether Claude was able to automatically use AddressSanitizer or not. Also not clear to me (I've programmed in Python for the past ten years and haven't touched C since my undergraduate days), so maybe someone could explain: how likely is it that these bugs are actually exploitable and/or show up for users?
Moving on...
So it's good they aren't just flooding maintainers with slop (and it means if they do publicly release Mythos, maintainers will get flooded with slop bug fixes), but... this makes me expect they have a really high false positive rate (especially if you count minor code issues that don't actually cause bugs or vulnerabilities as false positives).