Perceiving whether someone is happy, sad, angry, disappointed, or lying is an ability innate to humans; computers, however, are catching up in facial recognition, and may soon surpass us. Recent advances in the technology could give any individual the ability to spot inconsistencies between what someone is saying and what their face is telling, effectively eliminating humans’ “right to lie.”
Scientists have long held that humans can identify six basic emotions: disgust, fear, anger, surprise, happiness, and sadness. Recently, however, Ohio State University researchers found that humans can reliably recognize more than 20 facial expressions and emotional states, including combinations such as “angry surprise” or “happy disgust.” Perception is a field in which humans have always outperformed computers, but as facial recognition software improves, computers are slowly gaining the upper hand.
The same Ohio State study found that facial recognition software classified the six basic emotions with 96.9 percent accuracy, and compound emotions with 76.9 percent accuracy. The foundation for this technology is the Facial Action Coding System (FACS), created by Paul Ekman, an expert in facial micro-expressions, during the 1970s and 1980s. FACS breaks emotional expressions down into specific facial elements, such as muscle movements and tics (widening of the eyes, dropping of the lower lip, elevation of the cheeks, etc.).
FACS is commonly used in designing characters for animated films and video games. In medicine, it supports “bottom-up” emotional mapping to help diagnose conditions like post-traumatic stress disorder and autism, which can make it difficult for patients to identify emotions.
As researchers get better at implementing FACS techniques in software, computers may well surpass humans at facial recognition. A group of researchers at the University of California, San Diego founded Emotient, a company that uses machine-learning algorithms to detect emotion. Emotient is currently building an app for Google Glass that can detect, in real time, the emotion of any person entering the user’s field of vision. According to Marian Bartlett, the company’s lead designer, the application will also be able to distinguish real emotions from fake ones; in other words, to detect lying.
The app works on the basis of brain maps. Genuine emotional expressions are driven by the brain and spinal cord like a reflex; faked expressions require conscious thought, which involves the motor-coordination areas of the cerebral cortex. These differing nervous-system origins leave facial cues distinct enough for a computer to tell real and fake emotions apart, something most humans cannot do. Test results bear this out: the computer identified false and genuine expressions with 85 percent accuracy, while humans managed only 55 percent.
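The gist of the real-versus-fake distinction can be illustrated with a toy classifier. The feature below ("onset speed", how abruptly an expression appears) and the data are invented for illustration; Emotient's actual system learns from many facial cues with machine learning, and its thresholds are fit to real video.

```python
# Hypothetical sketch: reflex-driven genuine expressions tend to have
# different motion dynamics than deliberately posed ones, so even one
# timing feature carries some signal. All numbers here are made up.
samples = [
    # (onset_speed, is_genuine) -- invented toy data
    (0.9, True), (0.8, True), (0.7, True), (0.6, True),
    (0.5, True), (0.4, False), (0.3, False), (0.2, False),
]

THRESHOLD = 0.55  # learned from data in a real system; fixed here

def predict(onset_speed):
    """Classify an expression as genuine if its onset is fast enough."""
    return onset_speed > THRESHOLD

correct = sum(predict(speed) == label for speed, label in samples)
accuracy = correct / len(samples)
print(f"accuracy: {accuracy:.0%}")
```

Note that one borderline sample (a genuine expression with a slow onset) is misclassified, which is why accuracy figures like the 85 percent reported above fall short of perfect.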
By Andres Loubriel