Facebook encountered content aimed at disrupting elections as early as 2006, about 10 years before Mark Zuckerberg first acknowledged the problem, the platform's former global public policy chief has said.
Speaking at Sky News' Big Ideas Live event, in which experts and industry leaders discussed the biggest scientific and technological problems of our times, Paul Kelly said that staff "always had to deal with them".
“We saw the beginnings of disinformation campaigns built around the election as early as 2006 and 2008,” Kelly revealed in a panel on the future of big tech companies.
“We actually did a number of projects to try and increase civic engagement on the platform at the time. And we certainly saw people trying to use disinformation to influence elections early on at that stage.”
Facebook founder Zuckerberg admitted in 2017 that he should have taken the issue of fake news more seriously in the run-up to the 2016 presidential election, which Donald Trump won.
He had dismissed the idea as "insane" but then wrote in a public post in September 2017: "Calling it insane was dismissive and I regret it.
“This is too important an issue to be dismissive.”
Mr Kelly was answering a question from an audience member about the link between social media and the growing divisions in US politics and elsewhere.
Challenged by Sky News tech correspondent Rowland Manthorpe about the gap between Facebook first encountering misinformation and Zuckerberg acknowledging the problem, Kelly said "the scale has changed".
“By then I was gone,” he stressed.
“But we’ve definitely seen some electoral disinformation attempts in previous races.”
A spokesperson for Facebook’s parent company Meta said it had “developed a comprehensive approach to how elections are conducted on our platform” – “reflecting years of work” and “billions of dollars of investment”.
They added that Meta has "dedicated teams working on elections", including ahead of this month's US midterm elections.
“Meta has hundreds of people working across more than 40 teams to combat election and voter interference, combat disinformation, and find and remove infringing content and accounts,” they said.
“We’ve also developed stronger policies to stop allegations of delegitimization or fraud on our services.”