Now that Facebook has given Russia-linked ads to Congress, it’s outlining what it’ll do to prevent suspicious ad campaigns like this from happening again. To begin with, it’s promising to make ads more transparent: it’s building tools that will let you see all the ads a Page runs, not just the ones targeting you. In theory, this could help concerned people spot questionable advertising without needing help from Facebook or third parties. Most of Facebook’s efforts, however, center on toughening the ad review process and the standards that guide it.
The social network is hiring 1,000 more people for its global ads review teams over the next year, and is “investing more” in machine learning that can automatically flag suspect ads. Advertisers will need “more thorough” documentation if they’re running ads related to US federal elections, including confirmation of the organization they represent. Facebook is also tightening its policies to catch ads with “more subtle expressions of violence,” which might include some of the ads stoking social tensions.
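Facebook hasn’t said how its flagging models actually work, but a minimal sketch of ML-assisted review might look like the following: train a text classifier on previously approved and rejected ad copy, then escalate anything scoring above a threshold to a human reviewer. Everything here (the training examples, the `flag_for_review` helper, and the 0.5 cutoff) is invented purely for illustration.

```python
# Illustrative only: Facebook hasn't disclosed its ad-review models.
# This shows the generic shape of ML-assisted flagging -- score each
# incoming ad's text and route high-scoring ones to human reviewers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled data: 1 = previously rejected ad copy, 0 = approved ad copy.
ads = [
    "They are coming for your town. Arm yourselves and take it back.",
    "Stand against them before it is too late. Share if you agree.",
    "Summer sale: 20% off all running shoes this weekend only.",
    "Join our cooking class every Tuesday at the community center.",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(ads, labels)

REVIEW_THRESHOLD = 0.5  # invented cutoff; a real system would tune this

def flag_for_review(ad_text: str) -> bool:
    """Return True if the ad should be escalated to a human reviewer."""
    score = model.predict_proba([ad_text])[0][1]  # P(ad is objectionable)
    return score >= REVIEW_THRESHOLD

print(flag_for_review("Take back what is yours before they destroy it."))
```

The key design point is that the model doesn’t make the final call; it only triages, which is consistent with Facebook pairing the machine learning investment with 1,000 additional human reviewers.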
Facebook is aware that it isn’t alone in grappling with Russia-backed campaigns, for that matter. It’s “reaching out” to government and industry leaders both to share information and to help establish standards that would keep similar campaigns from simply moving to other platforms.
Facebook’s moves look like they could catch dodgy ad campaigns, particularly those attempting to influence elections. However, this is part of an all-too-familiar pattern at Facebook: the company implements broad changes (usually including staff hires) only after failing to anticipate the social consequences of a feature. No tech company can anticipate every possible misuse of its services, but the episode suggests Facebook needs to weigh a feature’s pitfalls far more carefully before it reaches the public, rather than waiting for a crisis to force its hand.