AI Series Part 4: Automated Content Creation and “Public Consensus”

This is the fourth post in a series on the impacts of artificial intelligence on modern life. In this article, we explore how machine learning algorithms can be used to influence people’s beliefs.

The Impact of “Peer Consensus” on Belief 

People want to be popular, and a crucial part of being popular is being seen to go along with widely held views. This is why people so often “jump on the bandwagon” of the newest craze.

However, the quest for popularity means that people commonly act counter to their own interests and beliefs. This behavior is captured by several well-known psychological effects, including:

  • Pluralistic Ignorance: If people believe that a particular view is popular, they will go along with it even if they privately disagree. This can produce a situation in which the majority privately rejects a view that nonetheless appears to enjoy widespread support.
  • Bystander Effect: If people observe that no one is taking a particular action, they may conclude that acting is inappropriate and fail to act themselves. Like pluralistic ignorance, this is a self-fulfilling prophecy, since each person’s inaction reinforces the (implied) consensus.

These effects come into play whenever it appears that a majority of people support a certain viewpoint (i.e., peer consensus). This means that anyone who can create the illusion of widespread peer consensus can get people to go along with a belief or action even if they privately disagree with it.

Automated Content Creation Has Become Scarily Realistic 

In the age of social media, platforms such as Facebook and Twitter have become the places where people get their news and communicate with one another. This has become especially true in the wake of COVID-19, which made face-to-face communication rarer.

Controlling the public perception of a narrative now requires controlling the mix of the comments on a webpage or social media post.  While a company or individual could control this perceived peer consensus by hiring humans to make dozens or hundreds of fake posts or comments, machine learning provides a cheaper and more scalable option. 

Machine learning (ML) has been applied to natural language processing (NLP) for a long time, but the quality of the results has varied. Early bots had no chance of passing the Turing Test, which measures an algorithm’s ability to convince a human that it, too, is human. Modern machine learning models built to generate text, however, have become dramatically better.

GPT-3 is a machine learning model trained on content from the Internet to learn how humans think and write. An op-ed published by the Guardian, explaining why humans should not be afraid of AI, was written entirely by GPT-3.

The article demonstrates that AI’s ability to write has become scarily human-like. The text is easy to read and scores at about a seventh-grade reading level, which is close to the target level of most newspapers.
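
To make the reading-level claim concrete, the short sketch below computes a Flesch-Kincaid grade for a piece of text. It assumes the third-party textstat Python package, which is not mentioned in the original article, and uses an invented sample sentence.

```python
# A minimal sketch of checking text readability, assuming the
# third-party textstat package (pip install textstat). The sample
# sentence is invented; it is not a quote from the op-ed.
import textstat

sample = (
    "Artificial intelligence is changing how people read the news. "
    "Short sentences and simple words keep the writing easy to follow."
)

# Flesch-Kincaid maps sentence and word statistics to an approximate
# U.S. school grade level.
grade = textstat.flesch_kincaid_grade(sample)
print(f"Flesch-Kincaid grade level: {grade:.1f}")
```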

If a robot can write a convincing op-ed on this topic, it can almost certainly write plausible comments and articles on other topics. This means it is entirely possible to manufacture an aura of peer consensus on a topic by having GPT-3 generate a variety of supporting comments and articles, each built on a different set of “facts” and written in a different style.
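
To illustrate how low the barrier has become, the sketch below generates several “comments” in different voices. It uses the small, freely downloadable GPT-2 model through Hugging Face’s transformers library as a stand-in for GPT-3 (which sits behind a paid API); the personas and topic are invented for illustration.

```python
# A rough sketch of how cheaply "peer consensus" comments can be
# mass-produced. GPT-2 (via Hugging Face's transformers library) is
# used as a freely downloadable stand-in for GPT-3; the personas and
# topic are invented for illustration.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the demo reproducible

personas = [
    "As a parent of two,",
    "Speaking as an engineer,",
    "I was skeptical at first, but",
]

for persona in personas:
    prompt = f"{persona} I think this proposal is a great idea because"
    result = generator(
        prompt,
        max_new_tokens=40,      # length of the generated continuation
        do_sample=True,         # sample for variety between "commenters"
        temperature=0.9,
        num_return_sequences=1,
    )
    print(result[0]["generated_text"])
    print("---")
```

Each run produces different text for each persona, and that built-in variety is part of what makes automated comments hard to spot at a glance.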

Fake Faces and Deepfakes Create Fake “Reporters” and Bloggers 

While the contents of a post are important to its impact, presentation also matters.  Even something as simple as whether or not an account has a profile picture has a significant impact on the credibility of the account. 

This is why many scams on sites like Facebook and LinkedIn involve photos harvested from the Internet. By attaching a face to a name, these scams increase the probability that a bot account will be believed to belong to a real person.

While attaching a face to an account is a plus for credibility, other factors have an impact as well. For example, which of the two faces below looks more “credible” to you?

If you chose the one on the right, you’re in good company.  Studies have found that wearing spectacles makes a person appear more credible and experienced.  Understanding these biases can be invaluable in making fake content seem more believable. 

The problem with stealing images from real accounts is that someone can discover the theft and discredit the account and its content. However, machine learning offers a solution to this problem as well.

Machine learning algorithms are now capable of generating completely believable human faces. In fact, both of the images above are fake and come from a site called This Person Does Not Exist, which uses a generative adversarial network (StyleGAN) to create fake but plausible faces.
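
As a small illustration, the sketch below downloads one such synthetic portrait. It assumes that This Person Does Not Exist still serves a random generated face at its root URL, which could change, and relies on the common requests package.

```python
# A small sketch of downloading one synthetic portrait, assuming the
# site still serves a random generated face at its root URL (this
# endpoint could change). Uses the third-party requests package.
import requests

URL = "https://thispersondoesnotexist.com"

resp = requests.get(
    URL,
    headers={"User-Agent": "Mozilla/5.0"},  # some hosts reject bare clients
    timeout=10,
)
resp.raise_for_status()

# The response body is a JPEG image; save it to disk.
with open("fake_profile.jpg", "wb") as f:
    f.write(resp.content)

print(f"Saved {len(resp.content)} bytes to fake_profile.jpg")
```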

AI and Its Effects on Fake News 

The combination of AI-generated text and images makes it possible to create social media profiles and online “journalists” that don’t actually exist. These accounts can quickly post plausible comments to create an illusion of peer consensus around a particular post or article.

Alternatively, the AI can write the articles and posts themselves. This allows an operator to build an online “brand” and following that supports certain topics and views but cannot be linked to any particular group or organization, since the author does not actually exist. Additionally, because AI can write far faster than any human, posts can be designed or edited to target each reader’s particular views.

Fake news is a growing problem, even though to date it may be driven largely by human actors. As AI evolves, these operations can scale exponentially as algorithms take on the role of content creation.
