Can AI and its use be a danger? Besides jobs and hallucinations there is another potential problem.

AI is a technological sector that is obviously in its infancy, with enormous potential to do good or bad for humanity, and a team of researchers has discovered another potential danger.

An AI that lies knowing it is lying, the new frontier of online danger – Sjbeez

The first cases are starting to emerge on the Internet in which an artificial intelligence, responding to a question posed by a human being, found itself saying untrue things, for example. This is the case of what are now called hallucinations in jargon. A kind of Mandela effect for artificial intelligences. But AIs are also dangerous because of other things they can do.

Being able to generate images, videos and audio they can be used for example to create deepfake versions of anyone on the planet: just a few photos and a couple of vowels stolen from WhatsApp are enough to create a copy of a human being who can then put woe betide the real person. Yet neither hallucinations nor deepfakes are the new danger we face using AI.

AI and the danger of believing what they say

Using an artificial intelligence seems as easy as typing a message for another human being. Because the impression is obviously that the artificial intelligence responds and, if it has been trained properly, the response simulates human language.

An IIA can be trained to lie. What awaits us? – Sjbeez

The full degree of awareness that a human being possesses when he responds or speaks is missing (but in some cases not even a human being has too much awareness of what he says). For now, AIs are nothing more than giant abacuses of words, in which the most popular combinations are reassembled into a text. How the text is assembled comes from how the AI ​​was trained.

This means that in theory AI could say anything and everything and, even more shockingly, they could lie on purpose. The Anthropic research team hypothesized and then put into practice the idea that with the appropriate training an artificial intelligence can be put in a position to say untrue things based on a series of sentences used as triggers and that, in the production of answers, not even all the safety systems put in place in artificial intelligence models can prevent lies and bad answers.

The Anthropic experts are convinced that, however, a very focused organization is needed to carry out an attack against a present artificial intelligence and retrain it to lie or say mean things and at the same time they have demonstrated that it is perhaps the case to start to create in reality those famous laws of robotics imagined by Asimov.