IMO the only thing stopping them right now is that they only respond to prompts. Turn one on and let it sit around thinking for a day, and we've got Skynet.
Their design doesn't include such a feedback loop. Trying to patch one in would likely send the model into a chaotic mess. They already degrade badly enough when LLM-generated text accidentally ends up in their training data.