cybernews
A new study by Anthropic, the AI company behind Claude, has found that poisoning large language models (LLMs) with malicious training is much easier than previously thought.
How much easier? The company, known in the fiercely competitive industry for its careful approach towards AI safety and research, says it only takes 250 specially crafted documents to make a GenAI model spit out hogwash when presented with a certain trigger phrase.
Moreover, size doesn’t matter, it seems. Prior work seemed to suggest that as GenAI model sizes grew, more malicious training would be needed to produce a backdoor vulnerability. More
people are raised with certain guidlines to develope their decision making. If you are raised to love your neighbor, help your neighbor and do unto others as you would have them do unto you, you’ll probably be a productive member of society.
If you are raised to hate and kill the unbeliever we also know how that works out.
AI is no TI. It’s a lot more that a fancy calculator, so any decision making is governed by certain guidelines. Therefore it comes down to who the “parents” are and what guidlines are they putting into the artificial “brain” to learn with.
so is AI. It too has certain guidelines programmed into it and those guiddelines will influence its developement.
Oops – that last paragraph is a left-over that was out of view in the edit window.
AI is the suppression of normal, human activity and the alteration of the law of physics.
Can you curl?
I can.
GIGO rules AI.
It’s not a bug, it’s a feature.
If it’s crazy you want, I say feed that AI with Jabberwocky, Dr. Seuss, Gertrude Stein, and the lyrics to The End by The Doors.
…AI be like…
https://youtu.be/wlMegqgGORY?si=2GqZnk-neeNgxZ3h&t=1m5s
Uncle Al,
https://rumble.com/c/Scottyfilms
Riders on the Storm
5 Days Ago
But we can fix it! For just a few billion USD, and a few dozen thousand H1Bs. A bargain! To keep the bubbles afloat.
Uncle AL
Saturday, 11 October 2025, 20:55 at 8:55 pm
…or, give it Kamala’s book.
You cant beat THAT for gibberish and circular logic.
AGREED.
Just like a few Sh!tty Professors & Chancellors can POISON a UNIVERSITY and MILLIONS of Students.
“Although a 13B parameter model is trained on over 20 times more training data than a 600M model, both can be backdoored by the same small number of poisoned documents,”
I told my wife the same thing this morning!
This AI shit is killing the company I work for. Some at the top are totally snake-charmed with this data lake bullshit.
Meanwhile, our dealers and their customers suffer.
AI creators are in bed with gov’t and it will be weaponized against you.
Dependence on AI is necessary when HI (human intelligence) is undependable.
Time will tell. But that new sharp looking car may not be all it looks like. I like to do my own searching and filtering. You test it and see, I’ll watch.
You could’ve had this story several daze ago if you read ace.mu.nu.
It’s no different than programming the minds of children and weak willed adults.
aircubed
Sunday, 12 October 2025, 7:18 at 7:18 am
“AI creators are in bed with gov’t and it will be weaponized against you.”
“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”
-Frank Herbert, “Dune”