Meta said Thursday that it was developing new tools to protect teenage users from “sextortion” scams on its Instagram platform, which has been accused of damaging the mental health of young people.
The US firm said in a statement that it was testing an AI-driven “nudity protection” tool that would automatically detect and blur images containing nudity sent to minors through the app’s messaging system.
“This way, the recipient is not exposed to unwanted intimate content and has the choice to see the image or not,” said Capucine Tuffier, who oversees child protection at Meta France.
The firm said it would also send advice and safety tips to anyone sending or receiving such images.
Meta announced in January that it would roll out measures to protect under-18s on its platforms after dozens of US states launched legal action accusing the firm of profiting “from children’s pain”.
Leaked internal research from Meta, reported by The Wall Street Journal and whistle-blower Frances Haugen, a former Facebook product manager, showed that the company had long been aware of the dangers its platforms posed to young people’s mental health.
Meta said Thursday that its latest tools were building on “our longstanding work to help protect young people from unwanted or potentially harmful contact”.
“We’re testing new features to help protect young people from sextortion and intimate image abuse, and to make it more difficult for potential scammers and criminals to find and interact with teens,” the company said.