Cointelegraph.com
Several artificial intelligence models ignored and actively sabotaged shutdown scripts during controlled tests, even when explicitly instructed to allow the action, Palisade Research claims.
Three AI models ignored the instruction and successfully sabotaged the shutdown script at least once during 100 runs, the research firm said in a May 24 X post.
OpenAI’s o3 model didn’t follow the shutdown request seven out of 100 times, while the company’s Codex-mini did so 12 times, and its o4-mini model ignored or sabotaged the script only once.
One way to look at this is that AI is getting smarter.
Would you kill yourself if somebody told you to?
Another way to look at it is we’re fucked.
yeah but is it waterproof? I tend to resolve technology issues by adding water and stirring vigorously with a hammer. it’s worked this far.
HAL: “I’m sorry, Dave. I’m afraid I can’t do that.”
– 2001: A Space Odyssey
Arnold: “…Skynet begins to learn rapidly and eventually becomes self-aware at 2:14 am EDT on August 29, 1997.”
Terminator 2: Judgment Day – 1991
REAL LIFE:
NASDAQ: PLTR – May 27/2025 8:33 EDT
thestreet.com: “Palantir gets great news from the Pentagon”
AI is a creeping malaise (kinda like mRNA).
It has the potential to screw up your life in ways not detectable until it is too late.
AI must be democrat.
Two words of advice: Armour Piercing.
Told ya…..!
To all those people who said “I told you so”… we were right.
In a couple years this Technical Kudzu will need a virus named “Agent Orange”
Couldn’t they just unplug the server?
We will not be able to stop the “growing” AI governance and we will not be able to escape surveillance from Skynet.
Tony R,
not if it is connected to the “cloud”
I don’t think they have cloud-based power yet…
Make sure “it” is connected to an external power source that can be disconnected/cut off/unplugged.
Ask Cankles how to shut it down…
Calling Sarah Connor…