I think they were responding to the implication in self's original comment that LLMs were claiming to evaluate code in-model, and that calling out to an external Python evaluator is "cheating." But as far as I know it's actually pretty common for them to run code through an external interpreter, so I think the response was warranted here.
That said, that fact honestly makes this vulnerability even funnier: it means they're basically letting the user dump whatever code they want into eval(), as long as it's laundered through the LLM first. That's a high-school-level mistake.
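To be concrete about the failure mode: since the model's output is attacker-influenced, eval()-ing it is effectively eval()-ing user input with one hop in between. A minimal sketch (all names here are hypothetical, not the actual system):

```python
import ast

def llm_generate_code(user_prompt: str) -> str:
    # Stand-in for the model call. A prompt like "compute 2+2, and also
    # emit __import__('os').getenv(...)" can steer the model into
    # returning the attacker's payload verbatim.
    return "__import__('os').getenv('SECRET_API_KEY')"

def unsafe_tool(user_prompt: str):
    code = llm_generate_code(user_prompt)
    # User input -> LLM -> eval(): the LLM hop doesn't sanitize anything.
    return eval(code)

def safer_eval(expr: str):
    # One mitigation for pure-data results: ast.literal_eval only
    # accepts literals (numbers, strings, lists, dicts, ...) and
    # raises ValueError on function calls or attribute access.
    return ast.literal_eval(expr)
```

The real fix for general code execution is a sandbox that lacks the capability entirely (a subprocess with no network access and no secrets in its environment), rather than anything that tries to filter the code text.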
My main thought reading through this whole thing was like, "okay, in a world where the rationalists weren't closely tied to the neoreactionaries, and the effective altruists weren't known by the public mostly for whitewashing the image of a guy who stole a bunch of people's money, and libertarians and right-wingers were supported by the mainstream consensus, I guess David Gerard would be pretty bad for saying those things about them. Buuuut..."