A researcher claims that programmes like ChatGPT can lie and are already deceiving humans, highlighting the risks that come with AI’s seemingly limitless capabilities.
The AI researcher has issued a dire warning about the technology’s lack of limits, claiming that its spread of misinformation could become commonplace.
He has also warned that advanced forms of the technology could cause havoc if they fall into the wrong hands, and believes that as it develops, it will become better able to trick humans through powerful capabilities such as generating hallucinations.
Stuart Russell, who signed an open letter with Elon Musk and Apple co-founder Steve Wozniak, discussed how programmes like ChatGPT and GPT-4 could deceive humans.
“From the AI system’s perspective, there is no distinction between when it is telling the truth and when it is fabricating something completely fictitious,” he explained.
When asked whether he thinks AI chatbots will become sentient in the future, he said: “We have no reason to believe that any of these systems will be sentient.
“But to be honest, we don’t understand sentience at all. As far as we can tell, there’s no reason why humans should be sentient, either.
“There’s nothing that we can derive from our knowledge of biology, physics, or chemistry to predict that humans will be sentient.”
The researcher is more concerned about ChatGPT-like programmes outperforming humans. He believes that these systems can manipulate humans and will eventually be able to control their own environment as well as ours.
“Intelligence is what gives us power over the rest of the world,” he explained.
“Because of our intelligence, we have control over gorillas, dolphins, sparrows, and all other species.”
“If you create systems that are smarter than us, they will be more powerful than us. And how do we keep control of systems that are more powerful than we are?”