Wouldn’t it be interesting if we could use an artificial intelligence lie detector to catch people who are lying? Artificial intelligence and brain-scanning technology may soon make it possible to detect lies reliably. When do we start lying, and how often do we lie? Can lies be detected with artificial intelligence? And the question we really need to consider: how ready are we for people to speak the truth at every moment?
Lie Detection with Artificial Intelligence
We Are All Liars
We learn to lie as children between the ages of two and five.
In adulthood, we are quite prolific. We lie to our bosses, our partners and, above all, to our parents whenever we get the chance.
According to research by Jerry Jellison, a psychologist at the University of Southern California, the average person hears up to 200 lies a day. Two hundred lies a day, on average! I found that a bit exaggerated; after all, I don’t lie that much :)
The majority of the lies we tell are “white” lies, the inconsequential niceties – “I love your dress!”, “You’re right!”, “I can’t live without you”, “I love you” – that grease the wheels of human interaction. But most people also tell one or two “big” lies a day.
In the end, though, lies are neither white nor black. A lie is a lie!
So why are we lying?
We lie to promote ourselves, protect ourselves and to hurt or avoid hurting others.
Our bodies expose us in every way. Hearts race, sweat drips and micro-expressions leak from small muscles in the face.
In fact, contrary to the story of Pinocchio, our noses may even shrink slightly when we lie, as blood is drawn away to the brain.
Even so, we are hopeless at spotting deception. Across 206 scientific studies, people separated truth from lies just 54% of the time on average – only marginally better than tossing a coin.
A Brief History Of Lie Detection
Although it is almost a hundred years old, the polygraph still dominates the lie detection market, with millions of polygraph tests administered every year.
The inventor of the polygraph, John Larson, was a 29-year-old police officer stationed in downtown Berkeley, California, in 1921. But Larson had also studied physiology and criminology, and when not on patrol, he worked in a lab at the University of California, developing ways to bring science to bear in the fight against crime.
He then devised an interview-based exam that compared a subject’s physiological responses when answering yes-or-no questions relating to a crime with the responses to control questions such as “Is your name Jennifer Brown?”
As proof of concept, Larson used the test to solve a theft at a women’s dormitory.
Not long after, the US government became the world’s largest user of the exam.
Companies also embraced technology.
For much of the past century, about a quarter of US companies subjected employees to polygraph exams, probing issues such as a history of drug use or theft.
McDonald’s used to use the machine on its workers.
In the 1980s, there were up to 10,000 trained polygraph examiners in the United States, performing 2 million tests per year.
The only problem was that the polygraph did not work. In 2003, the US National Academy of Sciences reviewed the evidence from 57 studies and concluded that it was “far from satisfactory.”
History is littered with examples of known criminals who cheated the test and escaped detection. Aldrich Ames, a CIA officer secretly spying for the KGB, passed two polygraph exams in the late 1980s and early 90s.
With little training, it was relatively easy to beat the machine.
Floyd “Buzz” Fay, who was falsely convicted of murder after a failed polygraph exam in 1979, became an expert in the test during his two and a half years in prison and began teaching other prisoners how to defeat it. After 15 minutes of instruction, 23 of the 27 he coached were able to pass.
In short, the polygraph has never been an effective lie detector. It is impossible for an examiner to know whether a rise in blood pressure is caused by the fear of being caught in a lie or by the worry of being falsely accused.
Governments in the US and Europe continue to allocate budgets for lie detection technology, funding research in the hope of finding a lie detector that truly works.
Virtual Border Agent
Passengers flying to Bucharest in 2014 were questioned by a virtual border agent named Avatar, an on-screen figure in a white shirt with blue eyes that introduced itself as “the future of passport control.”
In addition to an e-passport scanner and fingerprint reader, the Avatar unit included a microphone, infrared eye-tracking camera, and an Xbox Kinect sensor to measure body movement. It is one of the first “multi-modal” lie detectors – one that incorporates a number of different sources of evidence – since the polygraph.
The machine aims to send a verdict to a human border guard within 45 seconds, who can either wave the traveler through or pull them aside for additional screening.
Its claimed accuracy rates were between 83% and 85% in preliminary studies, including those conducted in Bucharest.
What is iBorderCtrl?
iBorderCtrl is a border security system distinct from the automated Avatar virtual border agent. It is currently being tested and is funded by the EU.
In the vision of the iBorderCtrl developers, signing up to travel to the EU would involve not just giving the system your name and address, but all sorts of details, including information about your social media accounts.
The system then asks you to turn on your webcam so that a humanoid avatar on your screen can interrogate you, while the “Automatic Deception Detection System”, an AI running in a server rack far away, studies your face for minuscule motions that supposedly betray that you are lying.
Can an Artificial Intelligence catch you in a lie?
In recent years, new techniques for lie detection have emerged, and artificial intelligence could prove a serious breakthrough in catching liars.
Data collected from people lying and telling the truth in response to various situations can be analyzed with deep learning techniques, and artificial intelligence software can be trained to identify liars.
An algorithm could combine and examine a liar’s tone of voice, sentence construction, body language, physiological signals such as blood pressure and pulse, and changes in brain waves to reveal deception.
Those who exhibit characteristics matching the patterns such an algorithm has learned could then be flagged as likely lying. At the very least, the 54% human baseline could be improved upon.
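As a rough illustration of the idea above, here is a minimal sketch that trains a classifier on multi-modal features. The feature names (pitch variance, blink rate, pulse) are hypothetical choices for this example, and the data is entirely synthetic; a real system would extract such signals from audio, video, and physiological sensors, and nothing here reflects an actual deployed detector.

```python
# Minimal sketch: classify "lying" vs "truthful" from combined multi-modal
# features. All feature names and distributions are assumptions for the demo.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def sample_features(lying: bool, n: int) -> np.ndarray:
    """Generate n synthetic [pitch_variance, blink_rate, pulse] vectors.
    Assumption for the sketch: lying shifts all three upward on average."""
    base = np.array([1.0, 0.3, 70.0])
    shift = np.array([0.4, 0.2, 8.0]) if lying else np.zeros(3)
    return base + shift + rng.normal(scale=[0.3, 0.1, 5.0], size=(n, 3))

# Labeled examples: 0 = truthful, 1 = lying
X = np.vstack([sample_features(False, 200), sample_features(True, 200)])
y = np.array([0] * 200 + [1] * 200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, random_state=0, stratify=y)

# A simple linear model stands in for the deep network described above.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

On cleanly separated synthetic data like this, the classifier easily beats the 54% human baseline; the hard part in practice is that real deceptive behavior is far noisier and less consistent across people.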
The defense industry and multinational companies financially support research in this field.
Many technology companies are working in this field, but for the time being no clearly successful product has emerged. Then again, some projects may be proceeding in secret.
One day, improvements in AI could find a reliable pattern for deception by scouring multiple sources of evidence. More elaborate brain-scanning technologies might even find an unambiguous sign of lying lurking in the brain.
In the real world, however, practiced falsehoods – the stories we tell ourselves about ourselves, the lies that form the core of our identity – complicate matters.
“We have this tremendous capacity to believe our own lies,” Dan Ariely, a renowned behavioral psychologist at Duke University, said. “And once we believe our own lies, of course, we don’t provide any signal of wrongdoing.”
A polygraph powered by artificial intelligence could achieve a high success rate. In the years to come, AI polygraphs might be used in courts, job interviews, exams, family relationships, and social life.
Or perhaps, as the words of Michel de Montaigne suggest, we will all come to understand that this entire endeavor can never go beyond a human dream:

“The reverse side of truth has a hundred thousand shapes and no defined limits.”