With major elections looming, Meta’s policy on deepfake content is in urgent need of updating, an oversight body said on Monday in a decision about a manipulated video of US President Joe Biden.
A video of Biden voting with his adult granddaughter, manipulated to make it falsely appear that he inappropriately touched her chest, went viral last year.
It was reported to Meta and later the company’s oversight board as hate speech.
The tech giant’s oversight board, which independently reviews Meta’s content moderation decisions, said the platform was technically correct to leave the video online.
But it also insisted that the company’s rules on manipulated content were no longer fit for purpose.
The board’s warning came amid fears of rampant misuse of artificial intelligence-powered applications for disinformation on social media platforms in a pivotal election year, not only in the United States but worldwide, as huge portions of the global population head to the polls.
The board said that Meta’s policy in its current form was “incoherent, lacking in persuasive justification and inappropriately focused on how content has been created,” rather than on the “specific harms it aims to prevent (for example, to electoral processes).”
In response, Meta said it was “reviewing the Oversight Board’s guidance and will respond publicly to their recommendations within 60 days in accordance with the bylaws.”
According to the board, in the Biden case, the rules were not violated “because the video was not manipulated using artificial intelligence nor did it depict Biden saying something he did not.”