Explaining how it works to prevent terrorist content from appearing on its site, Facebook says it uses machine learning to assess posts that may signal support for ISIS or al-Qaeda.
As part of its Hard Questions series, Facebook’s Monika Bickert and Brian Fishman said that its machine learning tool produces a score indicating how likely it is that a post violates the company’s counter-terrorism policies, helping reviewers prioritise and focus on the most important posts first.
The post also said that, in some cases, the system automatically removes posts when the tool’s confidence is high enough that its ‘decision’ is likely to be more accurate than a human reviewer’s.
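The triage process described above can be sketched in a few lines. This is a hypothetical illustration only, not Facebook’s actual implementation: the threshold value, function names and data shapes are all assumptions.

```python
# Illustrative sketch of score-based review triage (assumed names and values,
# not Facebook's real system).

AUTO_REMOVE_THRESHOLD = 0.98  # assumption: only act automatically when very confident


def triage(posts_with_scores):
    """Auto-remove posts scored above a high confidence threshold, and sort
    the rest so reviewers see the highest-risk posts first."""
    auto_removed = [post for post, score in posts_with_scores
                    if score >= AUTO_REMOVE_THRESHOLD]
    review_queue = sorted(
        (item for item in posts_with_scores if item[1] < AUTO_REMOVE_THRESHOLD),
        key=lambda item: item[1],
        reverse=True,  # highest scores go to the front of the queue
    )
    return auto_removed, [post for post, _ in review_queue]


# Example: one post crosses the threshold, the others are queued by score.
posts = [("post-a", 0.99), ("post-b", 0.40), ("post-c", 0.75)]
removed, queue = triage(posts)
# removed == ["post-a"], queue == ["post-c", "post-b"]
```

The key design point the article describes is the ordering: rather than processing flagged posts first-in-first-out, the score lets reviewers reach the likeliest violations sooner.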
Discussing the duration that content may stay online, Bickert and Fishman said that the amount of time terrorist content stays on their platform before it is taken down isn’t as important as limiting the amount of exposure a particular piece of content receives.
They said: “If we prioritise our efforts based narrowly on minimising time-to-action in general, we would be less efficient at getting to content which causes the most harm.”
Facebook says it removed three million pieces of terrorism-related content from its platform in the last quarter of 2018 alone, and has stripped away more than 14 million ‘pieces of terrorist content’ between January and September this year.