In this wave of AIGC (AI-generated content), we have found that models such as ChatGPT often give wrong answers.
As a result, many people have begun to worry: why would AI lie to us? Will AI turn evil?
I think this worry is overblown.
If you asked a kindergartener “What is the largest number in the world?” he would probably answer “ninety-nine,” because he can’t yet count into three digits.
That’s obviously not the correct answer, but can you say he’s lying?
The kid wasn’t lying. Within his limited cognition, he believes the sentence “the largest number is ninety-nine” is true. He answered your question sincerely, but, constrained by his awareness, he did not reach the correct conclusion.
The same goes for AI. Its so-called lying is really just the answer its algorithm judges to be the “most true.” It has no intent to deceive; its ability is simply limited.
For AI to truly learn to lie, it would first need some form of awareness, and it would also need to recognize when “lying in this situation is beneficial to me.”
It is easy to induce a “lie” from AI, just as you can use a lollipop to get a child to admit that Ultraman exists. But genuinely learning to lie is a very complicated process, and AI still has a long way to go.