AI Series Part 4: Automated Content Creation and “Public Consensus”

This is the fourth post in a series on the impacts of artificial intelligence on modern life. In this article, we explore how machine learning algorithms can be used to influence people's beliefs.

The Impact of “Peer Consensus” on Belief 

People want to be popular, and a crucial part of being popular is being seen to go along with popular views.  This is why people commonly “jump on the bandwagon” of the newest craze. 

However, the quest for popularity means that people commonly act counter to their own interests and beliefs. Psychologists have described this tendency with a few different concepts, including:

  • Pluralistic Ignorance: If people believe that a particular view is widely held, they will go along with it even if they privately disagree. This can result in a situation where the majority of people privately reject a view that nonetheless appears to have widespread support.
  • Bystander Effect: If people observe that no one is taking a particular action, they may conclude that acting would be wrong and fail to act themselves. Like pluralistic ignorance, this is a self-fulfilling prophecy, as each person's inaction reinforces the (implied) consensus.

These effects come into play whenever it appears that the majority of people support a certain viewpoint (i.e., peer consensus). If someone can create an illusion of widespread peer consensus, they can get people to go along with a belief or action even if they privately disagree with it.

Automated Content Creation Has Become Scarily Realistic 

In the age of social media, platforms such as Facebook and Twitter have become the places where people get their news and communicate with one another. This has become especially true in the wake of COVID-19, which made face-to-face communication rarer.

Controlling the public perception of a narrative now means controlling the mix of comments on a webpage or social media post. While a company or individual could manufacture this perceived peer consensus by hiring humans to write dozens or hundreds of fake posts or comments, machine learning provides a cheaper and more scalable option.

While machine learning (ML) has been applied to natural language processing (NLP) for a long time, the quality of the results has varied. Early bots had no chance of passing the Turing Test, which measures an algorithm's ability to convince a human that it, too, is human. Modern machine learning models applied to text generation, however, have become far more capable.

GPT-3 is a large language model trained on text from the Internet to produce human-like writing. An op-ed published by the Guardian explaining why humans should not be afraid of AI was written by GPT-3 (the Guardian's editors assembled the final piece from several of the model's outputs).

This article demonstrates that AI’s ability to write has become scarily human-like.  The text is designed to be easily read and scores at about a seventh-grade reading level, which is near the target level of most newspapers. 
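
As an aside on where such reading-level figures come from, readability formulas like Flesch-Kincaid can be computed programmatically. The sketch below is a minimal illustration using the open-source textstat package on placeholder text; neither the package nor the sample sentences appear in the original article.

```python
# pip install textstat
import textstat

# Placeholder text standing in for an excerpt of an AI-generated op-ed.
sample = (
    "Artificial intelligence is changing how people read and write online. "
    "Short sentences and common words keep the text easy to follow."
)

# Flesch-Kincaid converts average sentence length and average syllables per
# word into an approximate U.S. school grade level.
grade = textstat.flesch_kincaid_grade(sample)
print(f"Approximate reading grade level: {grade:.1f}")
```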

If a robot can write a convincing op-ed on this topic, it can almost certainly write plausible comments and articles on other topics. This means it is entirely possible to create an aura of peer consensus around a topic by having GPT-3 generate a variety of supporting comments, articles, and other content, each built on a different set of “facts” and written in a different style.
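
To illustrate how little effort this kind of comment generation takes, here is a minimal sketch using the Hugging Face transformers library with the publicly available GPT-2 model as a stand-in for GPT-3 (whose hosted API is access-controlled). The prompt and sampling parameters are illustrative assumptions, not anything described in the article.

```python
# pip install transformers torch
from transformers import pipeline

# GPT-2 stands in for a larger model such as GPT-3; the technique is the same.
generator = pipeline("text-generation", model="gpt2")

# A prompt nudging the model toward a supportive "reader comment".
prompt = "As a long-time reader, I have to say this article is spot on because"

# Sampling is enabled so each generated comment differs in wording and style.
outputs = generator(
    prompt,
    max_length=60,
    num_return_sequences=3,
    do_sample=True,
    temperature=0.9,
)

for i, out in enumerate(outputs, 1):
    print(f"--- generated comment {i} ---")
    print(out["generated_text"])
```

Even this small, freely available model produces fluent text; a larger model tuned toward a specific narrative would be correspondingly more convincing.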

Fake Faces and Deepfakes Create Fake “Reporters” and Bloggers 

While the content of a post is important to its impact, presentation matters as well. Even something as simple as whether or not an account has a profile picture significantly affects how credible the account appears.

This is why many scams on sites like Facebook and LinkedIn use photos harvested from the Internet. By putting a face to a name, these scams increase the probability that a bot account will be believed to belong to a real person.

While attaching a face to an account is a plus for credibility, other factors have an impact as well. For example, which of the two faces below looks more “credible” to you?

If you chose the one on the right, you’re in good company.  Studies have found that wearing spectacles makes a person appear more credible and experienced.  Understanding these biases can be invaluable in making fake content seem more believable. 

The problem with stealing images from other accounts is that someone can discover the theft (a reverse image search is often enough) and discredit the account and everything it has posted. However, machine learning offers a solution to this problem as well.

Machine learning algorithms are now capable of generating completely believable human faces. In fact, both of the images above are fake and come from a site called This Person Does Not Exist, which uses a generative adversarial network (StyleGAN) to create fake but plausible faces.
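
As a small illustration of how low the barrier is, the site mentioned above serves a freshly generated face on each request to its root URL (at least at the time of writing). The sketch below simply downloads and saves one such image; the User-Agent header and output filename are arbitrary choices.

```python
# pip install requests
import requests

# Each request to the root URL returns a newly generated, GAN-produced face.
URL = "https://thispersondoesnotexist.com"
headers = {"User-Agent": "Mozilla/5.0"}  # some hosts reject requests without a UA

response = requests.get(URL, headers=headers, timeout=30)
response.raise_for_status()

# Save the raw JPEG bytes to disk.
with open("generated_face.jpg", "wb") as f:
    f.write(response.content)

print(f"Saved {len(response.content)} bytes to generated_face.jpg")
```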

AI and Its Effects on Fake News 

The combination of AI-generated text and images makes it possible to create social media profiles and online “journalists” that do not actually exist. These accounts can quickly write plausible comments to create an illusion of peer consensus around a particular post or article.

Alternatively, the AI can write the articles and posts themselves. This allows its operators to build an online “brand” and following that supports certain topics and views but cannot be linked to any particular group or organization (since the author does not actually exist). Additionally, because AI can write far faster than any human, posts could be designed or edited to target each reader's unique views.

Fake news is a growing problem even though, to date, it may be largely driven by human actors. As AI evolves, these operations can scale dramatically as algorithms take on the role of content creation.
