In the aftermath of Hamas’ October 7 attacks, major social media platforms such as Meta are making significant policy changes. In the days that followed, misleading content and manipulated images began circulating widely across platforms, drawing global attention. Reacting swiftly, Meta removed or marked as disturbing more than 795,000 pieces of content in Hebrew and Arabic within three days of the attack.
To safeguard the well-being of the hostages kidnapped by Hamas, Meta is tightening its policies. The platform will remove any content that clearly identifies these hostages, even when it is shared to condemn the kidnappings or raise awareness. Blurred images of victims will still be permitted, but the emphasis remains on the victims’ safety and privacy.
Social Media Platforms Under European Scrutiny
The European Commission, concerned about the spread of harmful content online, has been pressing platforms like Meta to comply with the Digital Services Act (DSA); non-compliance can result in hefty fines. While Meta is ramping up its content-moderation efforts, X (formerly known as Twitter) has asked the Commission for more clarity on the alleged violations, and an investigation into the platform is ongoing.
An All-Hands Approach to Monitoring
In addition to these measures, many platforms are activating special teams to monitor the situation. Fluent Hebrew and Arabic speakers are now an integral part of these teams, ensuring that content violating community standards is swiftly addressed. Meta emphasizes that while Hamas remains banned from its platforms, space for social and political discourse remains intact.
The surge in removals after the attack reflects these heightened efforts: there was a sevenfold increase in removals of Hebrew- and Arabic-language content that breaches the platform’s Dangerous Organizations and Individuals policy.
In the fight against misinformation, platforms are also collaborating with third-party fact-checkers. With the help of organizations such as AFP, Reuters, and Fatabyyano, they are actively debunking false narratives, and warning labels are placed on content these fact-checkers flag. User controls have been strengthened as well: tools such as ‘Hidden Words’, ‘Limits’, comment controls, and more help users manage their online experience.
An Ongoing Challenge
Social media platforms remain caught in a challenging position, balancing the need for open discourse with ensuring user safety. The recent events have only reinforced the importance of vigilance, rapid response, and a user-centric approach. As the digital landscape evolves, platforms will continue to be tested, and their responses will shape the future of online discourse.