The Future of Sentiment Analysis: A New Twist to the Imitation Game


Emotion analysis has sparked new levels of ingenuity in data processing. The ambiguity of emotions poses long-suspected challenges in understanding what data means.

The Turing Test has an improbable new bar—emotion analysis—as sentiment analysis evolves beyond assessing the polarity of positive and negative responses.

Innovative companies seek to detect specific emotions with machines, even complex ones like humor. Customers in the marketing, advertising, branding and consumer satisfaction industries have been pressing for a more granular understanding of emotions. (Read How might machine learning tools evaluating emotion help with call center problems?)

Issues with Emotion Analysis

If emotion analysis appears to be an improbable bar, this is not hard to understand. “Humans, including those close to each other, such as couples, misread each other’s emotions,” said Paul Barba, Chief Scientist at Lexalytics.

Humans repeatedly correct their readings of each other’s emotions; could machines learn to do the same?

Emotion detection does entail complexity due to its contextual variation. “Humor can mean something entirely different when it is with someone as opposed to at someone,” Barba said.

Additionally, a person cracking a joke may well be subdued when expressing it compared to those enjoying it; a picture alone will likely not capture the innermost feelings of the people involved.


Then there is the gap between appearance and reality. Emotion or sentiment analysis is much sought after in consumer research and marketing, yet the views consumers express in surveys or focus groups often do not square with other measures, such as neurological readings of their emotional state.

Above all, emotion analysis is fraught with the risk of bias. People practicing mindfulness could appear to be impassive, bored, or emotionally drained if their emotion is tracked with computer vision. (Read Computer Vision: Revolutionizing Research in 2020 and Beyond.)

Solutions for Emotion Detection

Innovative companies are conscious of the pitfalls and have made some progress in eliminating bias and understanding expressions in their context.

Ilia Zaitsev, computational linguist and Co-Founder and CEO of EMRAYS B.V., underscored the need to look at emotions from the perspective of consumers, or receivers of information, who are likely to report their spontaneous reactions and describe them accurately.

Furthermore, emotions are gauged as an average over statistically large groups of users, which washes out the biases of particular user segments.
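The averaging step described above can be sketched in a few lines. This is a minimal illustration, not EMRAYS’ actual pipeline: the emotion labels, score scale, and data shape are all assumptions made for the example.

```python
from collections import defaultdict

def average_emotions(reactions):
    """Average per-reader emotion scores across a pool of readers.

    `reactions` is a list of dicts, one per reader, each mapping an
    emotion label to a score in [0, 1]. Averaging over a statistically
    large group dampens the idiosyncratic reactions of any one segment.
    """
    totals = defaultdict(float)
    for reader in reactions:
        for emotion, score in reader.items():
            totals[emotion] += score
    n = len(reactions)
    return {emotion: total / n for emotion, total in totals.items()}

# Hypothetical reactions from three readers to the same article
pool = [
    {"love": 0.9, "sadness": 0.0},
    {"love": 0.7, "anger": 0.2},
    {"love": 0.8, "sadness": 0.1},
]
print(average_emotions(pool))
```

A reader who never reacts with a given emotion simply contributes zero to that emotion’s total, so rare segment-specific reactions are diluted as the pool grows.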

Taking a cue from psychology theory, his company, Emrays, assesses five basic emotions: love, sadness, anger, surprise and smile.

“The rest of emotions are ambiguous and specialists in psychology don’t yet agree on a consistent explanation for them,” said Zaitsev.

“Laugh and surprise in combination could be deemed amazement but we let our clients decide their version of the truth.”

Humor is a complex emotion open to multiple interpretations. Satire, for example, elicits paroxysms of laughter from those who enjoy seeing the target lampooned, while it provokes jeers from those who hold the view being mocked.

“Personality, political beliefs, or community provide contextual codes that help understand the flavor of a person’s humor,” said Craig Tucker, Creator of V.E.R.N, which is pioneering humor detection.

“The shared values of the community and the milieu, with its culture and history, are often the backdrop that shapes their humor,” Tucker explained. Data about the context feeds into a decision model that classifies individuals’ conversational texts and expressions into varieties of humor.

Barba, who remains an avid observer of trends in emotion analysis not only for academic reasons but also because his customers have expressed an appetite for it, remains skeptical. “Even the best of emotion detection models today are unable to show robust results in the wild where it is hard to anticipate the contextual variation.”

Barba, for now, sees a future in improving language models for a more nuanced understanding of the polarity of positives and negatives in specific situations. These language models are pre-trained with large volumes of data and recognize the universal aspects of language in any situation.

“Pre-trained language models leave room to recognize polarity for a variety of features of products with smaller data sets and in multiple verticals,” said Barba.
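The idea Barba describes, reusing general language knowledge so that a small labeled set per vertical suffices, can be sketched with a toy stand-in. Here a fixed table of word vectors plays the role of a large pre-trained model, and a tiny perceptron learns polarity from four labeled examples; the vocabulary, vectors, and data are all invented for the illustration.

```python
# Toy stand-in for pre-trained knowledge: fixed word vectors whose
# first dimension loosely encodes polarity, second encodes topicality.
PRETRAINED = {
    "great": [1.0, 0.2], "love": [0.9, 0.1],
    "slow": [-0.8, 0.4], "broken": [-1.0, 0.5],
    "battery": [0.0, 1.0], "screen": [0.1, 0.9],
}

def embed(text):
    """Average the pretrained vectors of known words in the text."""
    vecs = [PRETRAINED[w] for w in text.lower().split() if w in PRETRAINED]
    n = max(len(vecs), 1)
    return [sum(v[i] for v in vecs) / n for i in range(2)]

def train_perceptron(examples, epochs=20, lr=0.1):
    """Fit a tiny linear classifier on top of the frozen embeddings."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for text, label in examples:  # label: +1 positive, -1 negative
            x = embed(text)
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1
            if pred != label:  # classic perceptron update on mistakes
                w = [w[i] + lr * label * x[i] for i in range(2)]
                b += lr * label
    return w, b

# Small, hypothetical vertical-specific dataset (consumer electronics)
data = [("love the battery", 1), ("great screen", 1),
        ("slow screen", -1), ("broken battery", -1)]
w, b = train_perceptron(data)

def polarity(text):
    x = embed(text)
    return "positive" if w[0] * x[0] + w[1] * x[1] + b > 0 else "negative"

print(polarity("great battery"))
```

The heavy lifting sits in the frozen embeddings; the per-vertical classifier on top needs only a handful of examples, which is the economy Barba points to.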

Multiple sources of data, whether text or computer vision, do correct for the errors that occur when users misreport their emotions. A Nielsen study, for example, found that the individual neurological techniques for emotion detection, facial coding, biometrics, and EEG, scored 9%, 27%, and 62% respectively, while their combined score was 77%. (Read How Passive Biometrics Can Help in IT Data Security.)
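One simple way to see why combining modalities helps is a noisy-OR fusion rule: the combined detector misses only if every modality misses independently. The independence assumption and the rule itself are this example’s simplification, not the Nielsen study’s method, though it lands in the neighborhood of the combined figure quoted above.

```python
from math import prod

def fuse_detections(probabilities):
    """Noisy-OR fusion: combined hit rate assuming each modality
    fails independently of the others."""
    return 1 - prod(1 - p for p in probabilities)

# Illustrative inputs only: the facial coding, biometrics, and EEG
# rates quoted from the Nielsen study above.
combined = fuse_detections([0.09, 0.27, 0.62])
print(round(combined, 3))  # roughly 0.75, near the 77% reported
```

Real fusion systems weight modalities by reliability and model their correlations; this sketch only shows the direction of the effect.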


Social media conversations, and emoticons especially, are a widely used data source; everyone we interviewed draws on them. “The cues in the textual data (including transcribed voice conversations) are indicative of emotions. ALL CAPs is one of them. Exclamation marks, the use of punctuation and intonations evince emotions to machines. Furthermore, machines read emotions from known relationships between the cues and words,” said Ben Kao, VP of Engineering at ListenFirst Media.
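The surface cues Kao lists, ALL CAPS, exclamation marks, emoticons, are straightforward to extract. Below is a minimal sketch; the emoticon-to-emotion mapping is a made-up example, not ListenFirst Media’s lexicon.

```python
import re

# Hypothetical emoticon lexicon mapping icons to emotion labels
EMOTICONS = {":)": "smile", ":(": "sadness", ":D": "smile", ":'(": "sadness"}

def extract_cues(text):
    """Pull out surface signals that machines can read as emotion cues."""
    return {
        "all_caps_words": re.findall(r"\b[A-Z]{2,}\b", text),
        "exclamations": text.count("!"),
        "emoticons": [label for icon, label in EMOTICONS.items() if icon in text],
    }

print(extract_cues("This show is AMAZING!!! :)"))
# {'all_caps_words': ['AMAZING'], 'exclamations': 3, 'emoticons': ['smile']}
```

In a production system these cues would become features alongside the words themselves, feeding the learned cue-to-emotion relationships Kao describes.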

ListenFirst Media wants to progressively increase the complexity of the language processing it does to understand the emotions expressed. The first step is to move from treating words as independent to looking at the emotional connotations of a sequence of words.

The N-gram approach looks at phrases; currently, ListenFirst Media parses no more than five words at a time. It uses a pre-processed pool of correlations to find the relationships between words and emotions.
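The n-gram lookup described above can be sketched as follows. The correlation table here is invented for the example; the real system draws on a much larger pre-processed pool.

```python
def ngrams(tokens, max_n=5):
    """All word sequences of length 1..max_n, mirroring a system
    that parses no more than five words at a time."""
    return [
        " ".join(tokens[i:i + n])
        for n in range(1, max_n + 1)
        for i in range(len(tokens) - n + 1)
    ]

# Hypothetical pre-computed correlations between phrases and emotions.
# Note how the bigram "not bad" carries a different signal than "bad".
CORRELATIONS = {
    "not bad": {"surprise": 0.2, "love": 0.4},
    "bad": {"anger": 0.6},
}

def score(text):
    """Sum the emotion weights of every matching n-gram in the text."""
    totals = {}
    for gram in ngrams(text.lower().split()):
        for emotion, weight in CORRELATIONS.get(gram, {}).items():
            totals[emotion] = totals.get(emotion, 0.0) + weight
    return totals

print(score("Not bad at all"))
```

The example also shows the limit of the approach: both “bad” and “not bad” fire here, so longer spans and, eventually, grammar are needed to resolve which reading dominates.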

“The holy grail for the future is full grammatical models over lower-level detections, such as parts of speech and entity recognition, using things like parse trees to construe the emotions in the text,” said Kao.

Parse trees show the relationships between basic units of language, such as nouns or entities and verbs, and other words in a sentence or in larger units like paragraphs. Stanford, for example, is an entity; related sentences or phrases, such as “I love the professors who invest in startups,” will reveal a pattern of sentiment toward the quality of its education and the overall experience.
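A toy version of the idea: once a parser attaches sentiment-bearing words beneath the entity they describe, reading sentiment toward that entity becomes a tree walk. The tree below is hand-built for the Stanford example; a real system would obtain it from a parser, and the node labels are this sketch’s invention.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    word: str
    label: str            # e.g. "ENTITY", "SENTIMENT", "OTHER"
    children: list = field(default_factory=list)

def sentiments_toward(root, entity):
    """Collect sentiment-bearing words in the subtree attached to
    the target entity."""
    found = []
    def walk(node, inside_entity):
        inside = inside_entity or (node.label == "ENTITY" and node.word == entity)
        if inside and node.label == "SENTIMENT":
            found.append(node.word)
        for child in node.children:
            walk(child, inside)
    walk(root, False)
    return found

# Hand-built structure for "I love the professors who invest in
# startups", with the clause attached to the entity "Stanford".
tree = Node("Stanford", "ENTITY", [
    Node("love", "SENTIMENT", [
        Node("professors", "OTHER", [Node("invest", "OTHER")]),
    ]),
])
print(sentiments_toward(tree, "Stanford"))  # ['love']
```

The payoff over n-grams is that “love” is credited to Stanford even when many words separate them, because attachment, not adjacency, carries the relationship.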

“Computational power needs to expand to decipher more words, sentences or language extracts,” Kao explained.

“Correlation is not causation and many correlations are not meaningful. We are building a neural network that avoids over-fitting or under-fitting. It is flexible enough to learn from new situations or data,” Craig Tucker asserted.

“When it spots something unfamiliar or it does not know, the neural network returns results that are ambiguous and prompt efforts to learn afresh.”
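One common way to get the behavior Tucker describes, flagging unfamiliar inputs rather than forcing a label, is to threshold the model’s confidence. The thresholding rule below is this example’s illustration, not V.E.R.N’s actual mechanism.

```python
from math import exp

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    exps = [exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits, labels, threshold=0.6):
    """Return the top label only when the model is confident;
    otherwise flag the input as ambiguous so it can be routed
    to further learning."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < threshold:
        return "ambiguous"
    return labels[best]

labels = ["humor", "not_humor"]
print(classify([3.0, 0.5], labels))  # confident: "humor"
print(classify([1.0, 0.9], labels))  # near-tie: "ambiguous"
```

The ambiguous bucket then becomes a queue of hard cases, exactly the inputs worth labeling to let the network “learn afresh.”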

Final Thoughts

Emotion analysis has inspired remarkable ingenuity in data processing, even as the ambiguity of emotions continues to challenge our understanding of what data means.

Creativity abounds in bringing a richer understanding of data not only with knowledge of linguistics but also related contextual information for machines to feel the world as humans would.

There is a long way to go before this imitation game is mastered—the diversity of human expression and situations is vast.



Kishore Jethanandani

Kishore Jethanandani is a futurist, economics nut, innovation buff, business technology writer, and entrepreneur in the computer vision, wearable devices, and IoT space. He specializes in writing about emerging, intersecting technologies.