
How AI beats fake reviews

3rd Dec 2019
VP Engineering Revuze
Blogger

Online reviews likely hold more power than you think. Feedback from other ecommerce users convinces plenty of people to commit to a product or walk away - with 85 percent of consumers relying on reviews to inform their purchases. In this context, fake reviews - feedback planted by a company or a paid third party - become a lucrative and destructive business.

Disingenuous reviews threaten the function of reviews entirely. Yet business is still booming for fake product and business reviews, with commentators estimating that almost 40 percent of Amazon reviews are fake.

This is why systems that use artificial intelligence to flag and ignore suspicious reviews are gaining prominence. Let’s explore how the tech works to combat the five-star phonies of ecommerce.

A problem in plain sight

Fake reviews have bubbled under the surface of ecommerce websites for years. And for good reason - they make money. Four out of five American adults check product reviews before making a purchase. Further, research shows that consumers are more swayed by a simple star rating than by what reviewers actually write.

Consider this trend alongside the continued growth of online commerce revenue. US ecommerce grew almost 15 percent in 2018, reaching $517 billion in sales.

The problem is widespread: one of the biggest players, Amazon, hosts 1.8 million vendors and sellers offering nearly 600 million items, which generate about 9.6 million new product reviews every month. So while four out of five buyers use reviews to judge whether products are worthy, the way ecommerce vendors evaluate those reviews remains staggeringly simplistic.

Targeting the fakes

Frankly, sorting the fake reviews from the genuine ones has not been a top priority for many ecommerce players - and this needs to change. Fake reviews call the entire customer feedback system into question and hurt the integrity of ecommerce platforms. Technology that weeds out illegitimate reviews is therefore a step in the right direction.

Identifying fake reviews in bulk is something that self-learning artificial intelligence does very well. Such systems employ language-processing methods to detect unusual patterns in text, writing style, and formatting. In 2017, for example, researchers at the University of Chicago built a machine learning system - a deep neural network - trained on a dataset of three million real restaurant reviews from Yelp.
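The details of such systems vary, but the core idea can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not the Chicago system): it turns review text into stylistic features and asks an off-the-shelf anomaly detector which reviews look statistically unusual.

```python
# A minimal sketch (not the Chicago system): flag reviews whose
# wording is statistically unusual, using TF-IDF features and an
# off-the-shelf anomaly detector. All example reviews are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import IsolationForest

reviews = [
    "Battery lasts two days, screen is sharp, very happy overall.",
    "Sound quality is decent but the strap feels cheap.",
    "Arrived late, still works fine after a month of daily use.",
    "BEST PRODUCT EVER buy now amazing deal best price best seller!!!",
]

# Character n-grams capture writing style (punctuation, repetition)
# as well as vocabulary.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
features = vectorizer.fit_transform(reviews).toarray()

# IsolationForest scores each review by how easily it can be isolated
# from the rest; unusual text isolates quickly.
detector = IsolationForest(contamination=0.25, random_state=0)
labels = detector.fit_predict(features)  # -1 = anomaly, 1 = normal

for review, label in zip(reviews, labels):
    print("SUSPICIOUS" if label == -1 else "ok        ", review[:60])
```

In a real pipeline the model would be trained on millions of reviews per category, but the principle is the same: reviews that sit far from the statistical norm get flagged for closer inspection.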

Better yet - self-learning systems grow smarter with every fake review they see. Unlike human-trained AI, which relies on pre-defined keywords that fake reviewers can learn to dodge, self-learning AI compares each product’s reviews to the industry’s standards and competitors. If it detects anomalies, it excludes the suspicious reviews from its sentiment analysis.
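As a rough illustration of that exclusion step, the sketch below assumes each review already has a sentiment score in [-1, 1] (the scores and threshold here are made up) and drops reviews that sit far outside the category’s baseline before aggregating:

```python
# A minimal sketch of anomaly exclusion, assuming a per-review
# sentiment score already exists (e.g. from a model). Reviews whose
# score deviates sharply from the category baseline are excluded
# before the product's overall sentiment is calculated.
import statistics

category_scores = [0.3, 0.4, 0.2, 0.35, 0.25, 0.45, 0.3]  # industry baseline
product_scores = [0.35, 0.3, 1.0, 1.0, 0.4, 1.0]          # one product's reviews

mean = statistics.mean(category_scores)
stdev = statistics.stdev(category_scores)

def is_anomalous(score: float, threshold: float = 3.0) -> bool:
    """Flag scores more than `threshold` standard deviations
    from the category mean (a simple z-score test)."""
    return abs(score - mean) / stdev > threshold

trusted = [s for s in product_scores if not is_anomalous(s)]
print(f"kept {len(trusted)} of {len(product_scores)} reviews")
print(f"sentiment: {statistics.mean(trusted):.2f} "
      f"(vs {statistics.mean(product_scores):.2f} with suspect reviews included)")
```

Here the suspiciously perfect 1.0 scores get dropped, and the aggregated sentiment falls back in line with the category norm.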

This is why machines that understand double sentiment, judged against a specific product and industry benchmarks, are more accurate than human reviewers. The “human” factor is responsible for about 90 percent of sentiment analysis errors - and eliminating it drastically reduces both false positives and false negatives.

It is extremely hard for human-trained machine learning to understand double sentiment, because things like sarcasm are complex to encode as rules. Self-learning systems, however, sift through all of an industry’s products and categories to identify general sentiment and tone - and so better handle cases of double sentiment and even sarcasm in reviews.
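To make “double sentiment” concrete, here is a toy sketch - with an entirely hypothetical mini-lexicon - that scores each clause of a review separately and flags reviews that mix positive and negative clauses rather than forcing a single label:

```python
# A toy illustration of "double sentiment": one review that is positive
# about one aspect and negative about another. The tiny lexicon is
# purely hypothetical; production systems learn such cues from data.
POSITIVE = {"great", "sharp", "fast", "love", "excellent"}
NEGATIVE = {"poor", "weak", "slow", "broke", "terrible"}

def clause_sentiments(review: str) -> list:
    """Score each comma/period-separated clause as +1, -1, or 0."""
    scores = []
    for clause in review.replace(".", ",").split(","):
        words = set(clause.lower().split())
        pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
        scores.append((pos > neg) - (neg > pos))  # +1, -1, or 0
    return scores

review = "The screen is sharp and I love the design, but the battery is terrible."
scores = clause_sentiments(review)
# Mixed positive and negative clauses = double sentiment, a nuance to
# be captured rather than a contradiction to be discarded.
print(scores, "-> double sentiment" if {1, -1} <= set(scores) else "")
```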

The end of the road (for fake reviews)

Fake reviews are smart, but artificial intelligence is smarter. These systems are only becoming harder to fool and pose serious problems for fake reviewers going forward. Consider the collaboration between Harvard University and MIT-IBM Watson AI Lab researchers, who developed a new tool that spots text generated by AI.

The tool, called the Giant Language Model Test Room (GLTR), exploits the fact that AI text generators lean on fairly predictable statistical patterns. In essence, it can tell whether text is too predictable to have been written by a human - spotting almost three-quarters of machine-generated reviews in test conditions, compared with about half for human judges.
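The underlying statistic is simple to demonstrate. The sketch below is not GLTR itself, but it applies the same idea using the open-source GPT-2 model: measure how often each token falls within the model’s top-k predictions, since generated text tends to stay inside that predictable zone.

```python
# A sketch of the statistical idea behind GLTR (not the tool itself):
# score how often each token is among a language model's top-k
# predictions. Generated text ranks in the top-k far more often
# than quirky human writing does.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def top_k_fraction(text: str, k: int = 10) -> float:
    """Fraction of tokens that fall in the model's top-k predictions."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    hits = 0
    for pos in range(ids.shape[1] - 1):
        top = logits[0, pos].topk(k).indices  # model's k likeliest next tokens
        hits += int(ids[0, pos + 1] in top)   # did the actual token match?
    return hits / (ids.shape[1] - 1)

# Suspiciously predictable text scores close to 1.0; decision
# thresholds would be tuned on labelled data.
print(top_k_fraction("This product is great. I love it. It works very well."))
```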

This should be to the delight - rather than apprehension - of the companies themselves.

This is because fake reviews do more harm than good. Feedback that does not reflect actual customer experiences only serves to confuse the relationship between brands and customers. Removing fake reviews therefore helps companies better gauge their customer relations.

Self-learning systems, for example, have been shown to better understand industry and product sentiment. This tech filters millions of reviews and analyzes the entire market on an ongoing basis - not only removing fake reviews but also presenting an unbiased depiction of market sentiment to any given brand.

In this way, AI allows consumer-centric enterprises to better grasp how customers feel about their brand. Removing fake reviews from the equation only clarifies what customers actually think - not what certain third-party influencers falsely generate.
