
Detecting Lies with AI

Amazon now sells $200 home versions. Donald Trump demanded they be used to root out White House ‘rats’ who were leaking information to the press. And reality TV shows such as Love Island use them to ‘increase drama and tension’.

Polygraphs have been around forever, and despite the shoddy science behind them, they simply won’t go away. They were first developed in the early 1900s by William Moulton Marston, who is better known for having created the comic-book character ‘Wonder Woman’. At the time, they were the latest in a long line of techniques developed to test for honesty. Ancient Chinese societies believed that liars generated less saliva than honest citizens, so suspects were asked to chew a bowl of rice and then spit it out for weighing. The Greeks had an elaborate process for assessing facial features and gestures to distinguish the honest from the deceitful. And in medieval times, suspected liars were asked to lick a hot poker. If God wished to vouch for their honesty, their tongues would not be burned.

Polygraphs are a decidedly American phenomenon, with few other countries embracing their use to the same degree. The machine works on the assumption that bodily changes accompany mental activity. By recording how individual bodies respond to questions, polygraphs purport to detect lies. Proponents argue that polygraphs are science, the original DNA testing if you will. They note that it is the polygraph machine, and not the examiner, that does the assessing. Much like an IQ test, polygraphs spit out numbers, and the numbers always tell the tale. As the president of ITR (it stands for Is That Right), an Edmonton-based polygraph services company, stated in a National Post article, “It’s scientifically proven to detect deception…it cannot be beaten”. Polygraphs have become a more than $2-billion-a-year industry.

Inconveniently, the numbers do lie, and quite regularly. While the graphs reflect bodily changes and emotions, they must be interpreted by the examiner, who has discretion over the findings. There is little science explaining how the body responses of a deceiver differ from those of an anxious but innocent subject. As a window into the soul, the test is both opaque and unreliable.

It has been argued that lie detector tests can only work if people are persuaded that they work. Even diehard proponents concede that the device functions as a classic placebo: people have to believe in it for it to have any chance of working. As Ken Alder points out in his book The Lie Detectors, the device is an intensifier that heightens the subject’s self-consciousness in the hope of prompting self-disclosure. In some ways, it is similar to placing your hand on the Bible in a court of law. It raises the stakes, “the one by reference to the all-seeing eye of God, the other to the all-seeing eye of science”. The problem, of course, is that there is wide variation in how liars, criminals, psychopaths and good guys respond to such a placebo.

Polygraphs are prohibited in most jurisdictions as a screening tool for employment. In Ontario, “It is against the law for an employer or anyone on behalf of an employer to directly or indirectly require, request, enable or influence an employee to take a lie detector test”. So, what to do if you are an industry seeking to expand beyond law enforcement? Well, you add technology, a layer of supposed objectivity, by jumping onto the AI bandwagon.

Silent Talker is a UK-based company whose technology is marketed for pre-employment testing. They ‘combine image processing and AI to classify multiple visible signals of the head and face that accompany verbal communication. This produces an accurate profile of a subject’s psychological state’. In other words, the technology measures multiple facial movements, from which AI then ‘determines’ which patterns of movement are associated with deception. Another company, Israel-based Nemesysco, uses AI-augmented voice analysis to measure changes in emotion and to detect lying. Yet another, Neuro ID, tracks mouse movements and keystrokes to infer fraud risk in loan applications. It assigns an ‘honesty confidence’ score based on the logic that being deceptive ‘may increase the normalized distance of click movement, decrease the speed of movement, increase the response time, and result in more left clicks’. And yet another, Converus, sells software called EyeDetect that measures the dilation of a subject’s pupils during an interview to ‘detect cognitive load…and lying’. And on and on and on. None to date is backed by anything more than small trials, and all state their efficacy using hedged phrases such as ‘up to 80% effective’.
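To make that ‘honesty confidence’ logic concrete, here is a minimal sketch, in Python, of the kind of heuristic the Neuro ID quote describes. It is an illustration only: the feature names, the per-user baseline, and the equal weighting are all assumptions invented for this example, not the company’s actual model.

```python
# A minimal, purely hypothetical sketch of the kind of behavioural-telemetry
# heuristic described in the quote above. Every name, weight, and number here
# is an illustrative assumption, not the vendor's actual model.
from dataclasses import dataclass

@dataclass
class SessionTelemetry:
    click_distance_norm: float  # normalized cursor travel between clicks
    movement_speed: float       # average cursor speed, pixels per second
    response_time_s: float      # seconds from question shown to answer given
    left_clicks: int            # number of left clicks in the session

def deception_risk(session: SessionTelemetry, baseline: SessionTelemetry) -> float:
    """Score a session 0..1 against a per-user baseline. Following the quoted
    logic, risk rises with longer click paths, slower cursor movement,
    slower responses, and more left clicks."""
    signals = [
        session.click_distance_norm > baseline.click_distance_norm,
        session.movement_speed < baseline.movement_speed,
        session.response_time_s > baseline.response_time_s,
        session.left_clicks > baseline.left_clicks,
    ]
    # Naive equal weighting: each triggered signal contributes 1/4 to the score.
    return sum(signals) / len(signals)

baseline = SessionTelemetry(1.0, 420.0, 2.1, 3)
applicant = SessionTelemetry(1.6, 310.0, 4.8, 7)
print(f"risk: {deception_risk(applicant, baseline):.2f}")  # prints: risk: 1.00
```

Even in this toy form, the thinness of the inference is apparent: the same four signals would fire for a nervous, distracted or simply slow applicant just as readily as for a deceptive one.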

Silent Talker’s website markets its technologies with the line ‘A trustworthy and competent team is critical to maintaining a competitive edge’. The new generation of AI-based trustworthiness detectors promises ‘objectivity’ while carefully framing its value as datapoints for making good employment decisions. Expect to see ‘credibility assessment scores’ for candidates in the very near future. Beware, for the efficacy steak remains far less cooked than the promissory marketing sizzle.

About the Author

Robert Hebert, Ph.D., is the Managing Partner of Toronto-based StoneWood Group Inc., a leading human resources consulting firm. He has spent the past 25 years helping firms in the technology sector address their senior recruiting, assessment and leadership development requirements.

Dr. Hebert holds a Master’s Degree in Industrial Relations as well as a Doctorate in Adult Education, both from the University of Toronto.

 
