
OpenAI, the maker of ChatGPT, said it acted within 24 hours to disrupt deceptive uses of AI in covert operations focused on the Indian elections, and that the campaigns achieved only negligible audience engagement.

In a report on its website, OpenAI said that STOIC, a political campaign management firm based in Israel, generated some content on the Indian elections alongside commentary about the conflict in Gaza.

"The network began generating comments that focused on India, criticised the ruling BJP party and praised the opposition Congress party," the report said. "We disrupted some activity focused on the Indian elections less than 24 hours after it began."

OpenAI said it banned a cluster of accounts operated from Israel that were used to generate and edit content for an influence operation spanning X, Facebook, Instagram, websites, and YouTube. "This operation targeted audiences in Canada, the United States and Israel with content in English and Hebrew. In early May, it began targeting audiences in India with English-language content," it said, without giving further detail.

Responding to the report, Minister of State for Electronics & Technology Rajeev Chandrasekhar said, "It is absolutely clear and obvious that @BJP4India was and is the target of influence operations, misinformation and foreign interference, being done by or on behalf of some Indian political parties.

"This is a very dangerous threat to our democracy. It is clear that vested interests in India and outside are driving this, and it needs to be deeply scrutinised, investigated and exposed. My view is that these platforms should have released this much earlier, and not so late, when the elections are ending," he added.

OpenAI reiterated its commitment to building safe and broadly beneficial AI. "Our investigations into suspected covert influence operations (IO) are part of a broader strategy to meet our goal of deploying AI responsibly," it said.

The company said it is committed to enforcing policies that prevent abuse and to improving transparency around AI-generated content, particularly in detecting and disrupting covert influence operations, which attempt to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them.

"Over the last three months, we have disrupted five covert IO that sought to use our models in support of deceptive activity across the internet. As of May 2024, these campaigns do not appear to have meaningfully increased their audience engagement or reach as a result of our services," it said.

Describing its actions, OpenAI said it disrupted activity by a commercial company in Israel called STOIC, noting that it disrupted the operation's activity, not the company itself.

"We nicknamed this operation Zero Zeno, for the founder of the Stoic school of philosophy. The people behind Zero Zeno used our models to generate articles and comments that were then posted across multiple platforms, notably Instagram, Facebook, X, and websites associated with this operation," it said.

The content posted by these various operations covered a wide range of issues, including Russia's invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticism of the Chinese government by Chinese dissidents and foreign governments.

OpenAI outlined its multi-pronged approach to combating misuse of its platform, which includes monitoring and disrupting threat actors, from state-affiliated groups to sophisticated, persistent threats. "We invest in technology and teams to identify and disrupt actors like the ones we are discussing here, including leveraging AI tools to help combat abuses," it said.

By working with others in the AI industry, OpenAI said it aims to highlight potential misuses of AI and share lessons learned with the public.