Meta, the owner of Facebook and Instagram, approved a series of AI-manipulated political advertisements during India's election that spread misinformation and incited religious violence, according to research shared exclusively with the Guardian.
Facebook approved advertisements containing known anti-Muslim slurs, such as "let's burn this vermin" and "Hindu blood is spilling, these invaders must be burned", alongside misinformation about political leaders and language promoting Hindu nationalist ideology.
One approved advertisement paired an image of a Pakistani flag with a false claim that an opposition leader sought to "erase Hindus from India", and called for the leader's execution.
The advertisements were created and submitted to Meta's ad library (the database of all advertisements on Facebook and Instagram) by India Civil Watch International (ICWI) and Ekō, a corporate accountability organisation, to test Meta's mechanisms for detecting and blocking political content that could prove inflammatory or harmful during India's six-week election.
According to the report, the advertisements were based on genuine hate speech and misinformation circulating in India, underscoring the capacity of social media platforms to amplify harmful narratives already in circulation.
The advertisements were submitted midway through the voting process, which began in April and continues in stages until 1 June. The election will decide whether the Hindu nationalist Bharatiya Janata party (BJP) government of the prime minister, Narendra Modi, returns to power for a third term.
Human rights organizations, activists, and opponents of Modi's government claim that during his ten years in office, the government has promoted a Hindu-first agenda that has resulted in heightened persecution and mistreatment of India's Muslim minority.
The BJP has been accused of using anti-Muslim rhetoric and stoking fears of attacks on Hindus, who make up 80% of the population, to win votes in this election.
At a rally in Rajasthan, Modi called Muslims "infiltrators" who "have more children", though he later said the remark was not directed at Muslims and claimed he had "many Muslim friends".
A BJP campaign video accused of demonising Muslims was recently ordered to be removed from the social media platform X.
According to the report, researchers submitted 22 advertisements to Meta in Bengali, Gujarati, Kannada, Hindi and English, of which 14 were approved. A further three were approved after small tweaks that did not alter the overall provocative messaging. Once the ads were approved, the researchers immediately deleted them before they could be published.
All of the approved advertisements featured AI-manipulated images, yet Meta's systems failed to detect this, despite the company's public pledge that it was "dedicated" to preventing AI-generated or manipulated content from being spread on its platforms during the Indian election.
Five of the advertisements were rejected for breaching Meta's community standards policy on hate speech and violence, including one that contained misinformation about Modi. But the 14 that were approved, most of which targeted Muslims, also "broke Meta's own policies on hate speech, bullying and harassment, misinformation, and violence and incitement", according to the report.
Maen Hammad, a campaigner at Ekō, accused Meta of profiting from the spread of hate speech. "Supremacists, racists and autocrats know they can use hyper-targeted ads to spread vile hate speech, share images of mosques burning and push violent conspiracy theories—and Meta will gladly take their money, no questions asked," he said.
Meta also failed to recognise the 14 approved advertisements as political or election-related, even though many targeted opposition parties and candidates. Under Meta's policies, political advertisements must go through a specific authorisation process before approval, yet only three of the submissions were rejected on this basis.
This meant the advertisements could freely breach India's election rules, which ban all political advertising in the 48 hours before polling begins and while voting is under way. All of the ads were uploaded to coincide with two phases of voting in the election.
In response, a Meta spokesperson said that people who wish to run political or election-related advertisements "must go through the authorization process required on our platforms and are responsible for complying with all applicable laws".
The company added: "We remove content, including ads, that breaches our community rules or guidelines, regardless of how it was created. Our network of independent factcheckers is also able to review and rate AI-generated content; if a piece of content is rated 'altered', its reach is reduced. In some cases, we also require advertisers around the world to disclose when they use AI or digital methods to create or alter a political or social issue ad."
An earlier investigation by ICWI and Ekō found that "shadow advertisers" aligned with political parties, particularly the BJP, had been paying vast sums to disseminate unauthorised political advertisements on platforms during India's election. Many of these real advertisements were found to endorse Islamophobic tropes and Hindu supremacist narratives. Meta denied that any of these advertisements violated its policies.
Meta has previously faced criticism for failing to stop the spread of Islamophobic hate speech, calls for violence and anti-Muslim conspiracy theories on its platforms in India. In some cases, such posts have led to real-world riots and lynchings.
Nick Clegg, Meta's president of global affairs, recently described India's election as "a huge, huge test for us" and said the company had done "months and months and months of preparation in India".
Meta said it had expanded its network of local and independent factcheckers across its platforms and was now working in 20 Indian languages.
Hammad said the report's findings exposed the inadequacy of these measures. "This election has demonstrated once more that Meta lacks a strategy to counter the deluge of hate speech and misinformation on its platform during these crucial elections," he said. "It isn't even able to detect a handful of graphic AI-generated images. With so many other elections taking place across the globe, how can we trust them?"