Facebook says it is trying to prevent massive election interference like it saw in 2016. But the company won't say if it has seen similar interference from Russia or other groups ahead of the US midterm elections.
The company recapped its various election-related initiatives on a 45-minute call with reporters Tuesday afternoon. It provided updates mostly about previously announced efforts, including a searchable database of political ads and fact-checking partnerships to fight misinformation.
However, when multiple reporters asked if it had seen or found suspicious election activity recently in the United States, Facebook wouldn't say.
"We think it's inevitable that we will find evidence and we will find other actors, whether these are from Russia or from other countries or domestic actors that are looking to try and abuse the platform," said Nathaniel Gleicher, Facebook's head of cybersecurity policy.
Revealing too much about interference could impede investigations, according to the company.
"One nuance here is that because [of] the nature of these investigations, we always have to be careful about compromising the investigation, either our own or government, when we're thinking about how to engage the public around these issues," said Gleicher.
Facebook has been under pressure from lawmakers and its users to prevent election interference.
The company also defended its policy that allows individuals and publishers to share false news like conspiracy theories on the platform.
The issue picked up steam earlier this month when CNNMoney's Oliver Darcy asked why Infowars was allowed on the site, and it culminated last week when Facebook founder and CEO Mark Zuckerberg appeared to defend Holocaust deniers in a Recode podcast interview with Kara Swisher. He later walked back the comment.
"If you are who you say you are, and you're not violating our community standards, we don't believe we should stop you from posting on Facebook," said Tessa Lyons, Facebook's product manager for the news feed.
Last week, Facebook changed its policy, adding an exception that allows misinformation to be removed if it incites violence. The new rule will initially roll out only in Sri Lanka and Myanmar.
Posts that are marked as incorrect by fact-checking partners will have "reduced distribution," meaning they won't appear as high in people's news feeds and will be flagged as untrue.
The company says one of the most useful things it does around elections is taking down fake accounts. Using machine learning, Facebook says it is now blocking or removing a million fake accounts a day, as they are created.
Fighting bad actors among its more than 2 billion users is a big job, so Facebook uses a combination of automation and human review to weed out the various scams and fakes. A manual investigations team looks for the newest innovations in election interference on social media, flagging new behaviors and training the automated systems to find them as well.
The company maintains that its efforts are already making an impact on elections, citing the recent Mexican presidential election as an example. Facebook took down fake likes and pages, removed accounts impersonating candidates, and worked with a local fact-checking organization, Verificado, to remove false news.
Its ad-transparency tool was not yet available in Mexico ahead of the election. That tool, launched in the United States in June, creates a searchable database of political ads that shows which organization paid for each ad, along with any other ads that organization has run.
Facebook said that political ads aren't a large part of its business "from a revenue perspective" and that it thinks more transparency is important.