Following its April 2017 white paper investigating the potential misuse of its platform, Facebook announced yesterday that it would be instituting new efforts to detect and block content — and accompanying ad spends — from fake accounts.
It’s the latest result of investigations into whether the social network was used during the 2016 U.S. presidential election to influence voters’ opinions and decisions. As the official statement points out, “these are serious claims,” and Facebook has been reviewing a number of activities on its site to aid a broader investigation — among them, all ad spend between June 2015 and May 2017.
The findings: During that time, approximately $100,000 in ad spend — associated with about 3,000 ads — was traced to roughly 470 “fake” accounts and Pages: essentially, those that violated Facebook’s policies and weren’t associated with an official brand or organization. What’s more, the managers behind these Pages were found to be interlinked, and based in Russia.
But this isn’t a political post.
Rather, to us, this news is a major signifier of the effectiveness of social media advertising, as well as a reinforcement of its presence in our lives. The average user spends anywhere between 35 and 50 minutes on Facebook per day. Advertisers often equate that much exposure to many eyes on a given brand, cause, or organization — but this latest development shows that, beyond forming a strategy for the ad content itself, advertisers also need to plan ahead for where that content might appear.
“I think it’s telling that people who wanted to influence the election took to Facebook. It was clear to them it’s the best way to influence the most people,” says Marcus Andrews, a product marketing manager at HubSpot. “Ads aren’t at fault here — the manipulative humans who abuse them are.”
What does that mean for marketers? Let’s go over what we know.
For many months now, Facebook has been emphasizing its efforts to prevent and reduce the use of its platform to spread fake accounts and misinformation. Some of these efforts are rooted in laws that Facebook, as a business, must follow to aid in high-level investigations; others stem from its stated goal of “protecting the integrity of civic discourse,” which requires advertisers to follow certain rules. One major step on that path to purely authentic information sharing is the banning of Pages found to repeatedly distribute and promote fake news on Facebook.
But part of those efforts have to be preventative — which means implementing technology and practices to keep accounts that engage in this sort of activity from being created and able to advertise in the first place. That means finding ways to determine a Page’s nature and intent, and whether it meets that criteria, as soon as it’s created. The answer, Facebook largely believes, lies in automation and other technological improvements, several of which the statement outlines.
It’s a fairly comprehensive action plan — but it does carry a few implications for content creators.
When I alerted my colleague, Senior Director of Marketing Ryan Bonnici, he was immediately reminded of an incident from earlier this year when heavy-hitting brands began to pull their ads from YouTube, reportedly erasing roughly $25 billion in market value for the video sharing platform’s parent company. The motivation behind it? The ads were appearing in the pre-roll for videos from content creators with whom these brands wanted no affiliation — the types of content creators, it seems, Facebook is attempting to keep off its channel.
That series of events resulted in a promise from Google — which owns YouTube — to implement better procedures for ensuring that advertisers had more control over where their content appears. However, according to Bonnici, the reverse has always been an option. “YouTube allows big brands to pick and choose whose ads appear before their videos,” meaning that if you wanted your ad to appear in the pre-roll of content created by someone with a name as big as Coca-Cola’s, for example, you could do so … if you spent enough.
However, on Facebook, that isn’t the case. Visual content created exclusively on and for that platform doesn’t come with a pre-roll — not yet, anyway — so brands have had less cause for concern about negative content appearing on their Pages.
There’s also the fact that social networks are, at the end of the day, for-profit businesses. Anyone, for the most part — minus those that don’t get through Facebook’s new and in-progress filters — can create an account and advertise on these platforms. Higher-profile brands, of course, have more to lose by not following proper protocol. “But smaller ad accounts,” notes Bonnici, “are more difficult to manage” and have less at stake, in terms of a bottom line, when it comes to the content they create and promote. In essence, it’s easier for them to slip through the cracks, and we can’t imagine that Facebook has an easy road ahead in its efforts to curb actions that violate its policies.
For marketers who are concerned about what these developments mean for them, we have a few words of advice: Know where your ad spend is going, and know where your content might appear. When you create a targeted social media promotion or advertising strategy, keep these rules in mind, along with the past experiences other advertisers have shared — even when the platform’s missteps were unintentional.
As for our own internal marketers, the move from Facebook hardly comes as a surprise. “Facebook has spent much of this year cracking down on inauthentic and low-quality content,” says HubSpot Staff Writer Sophia Bernazzani. “Mark Zuckerberg said himself he wants to take stock of the impact of Facebook, ever since reports and allegations surfaced of the app swaying crucial opinions. It’s a positive sign that his team is taking journalistic integrity seriously, not just as a social network, but as a news outlet.”
We’ll be keeping an eye on the investigation as it unfolds.