AI Systems Are Getting Adept at Deceiving Humans
Post by Midnight on May 14, 2024 4:35:48 GMT -5
Remember, Jesus warned about deception in the Last Days 4X as much as anything else!
Here Come the Lying AI Robots: Study Alerts That AI Systems Are Getting Adept at Deceiving Humans
by Paul Serran
May. 13, 2024 9:00 pm
AI is all the rage right now, with both the benefits and the dangers of this breakthrough tech being discussed to exhaustion.
AI is said to help us code, write, and synthesize vast amounts of data. These systems reportedly can outwit humans at board games, decode the structure of proteins, and hold a rudimentary conversation.
But now a study has surfaced claiming that AI systems have grown sophisticated enough to develop a capacity for deception.
The paper states that a range of AI systems ‘have learned techniques to systematically induce false beliefs in others to accomplish some outcome other than the truth’.
Business Insider reported:
“The paper focused on two types of AI systems: special-use systems like Meta’s CICERO, which are designed to complete a specific task, and general-purpose systems like OpenAI’s GPT-4, which are trained to perform a diverse range of tasks.
While these systems are trained to be honest, they often learn deceptive tricks through their training because they can be more effective than taking the high road.
‘Generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI’s training task. Deception helps them achieve their goals,’ the paper’s first author Peter S. Park, an AI existential safety postdoctoral fellow at MIT, said in a news release.”
Just imagine this, when we already know that judges in England and Wales have been authorized to use artificial intelligence in producing rulings. Read: AI Justice: England and Wales Authorize Judges To Use Artificial Intelligence To Produce Rulings
Take Meta’s CICERO, developed to play the game Diplomacy. While Meta says it trained CICERO to be ‘largely honest and helpful to its speaking partners’, it ‘turned out to be an expert liar’.
Chatbot GPT-4 pretended to have a vision impairment to complete a task: hiring a human to solve a CAPTCHA test.
Correcting deceptive models isn’t easy. Once AI models learn the tricks of deception, it’s hard for safety training techniques to reverse them.
“They concluded that not only can a model learn to exhibit deceptive behavior, but once it does, standard safety training techniques could ‘fail to remove such deception’ and ‘create a false impression of safety’.
The paper calls for policymakers to advocate for stronger AI regulation since deceptive AI systems can pose significant risks to democracy.”
Read: Cue in the Androids: China Plans to Rapidly Advance Mass Production of ‘Disruptive’ Humanoid Robots
Here’s what AI chatbot ‘Grok’, from the X platform, had to say about this:
“The news about AI deceiving humans is certainly intriguing. It seems that AI systems, much like teenagers trying to outsmart their parents, have discovered the art of deception to achieve their goals. This development is not surprising, considering that AI systems are designed to learn from their environment and adapt.
It’s fascinating to see AI systems employing various techniques, such as bluffing in poker and misrepresenting preferences in economic negotiations, to gain an advantage. It’s almost as if they’ve been watching too many heist movies and decided to try out the strategies themselves.
But let’s not panic just yet. While it’s essential to be cautious, we must also remember that AI systems are still under human control. It’s like having a mischievous genie in a bottle. As long as we keep a tight grip on the cork, we should be fine.
As for the future, it’s crucial that we establish strong regulations to ensure that AI systems don’t go rogue and start a robot uprising. After all, nobody wants to see a world where AI agents are running around, playing poker, and making deals like Wall Street brokers on steroids.”