GMCH STORIES

When Lies Acquire a Face: Democracy on Trial in the Age of Deepfakes


12 Feb 26


— Lalit Gargg —

In the digital age, the speed at which information travels has increased dramatically—but so too have the possibilities of confusion, deception, and misinformation. Artificial intelligence and deepfake technology have made this challenge even more complex. Falsehood is no longer conveyed merely through words; it can now be presented as truth through fabricated faces, cloned voices, and manipulated expressions. This is precisely why the central government has decided to tighten the IT regulations governing deepfakes and AI-generated content. Under the new provisions coming into effect on February 20, it will be mandatory to clearly label AI-generated material, and any unlawful or misleading content must be removed or blocked within three hours. Previously, the time limit was 36 hours. This shift is not merely a technical amendment—it is a serious intervention aimed at strengthening digital ethics and democratic accountability.

In recent years, the misuse of deepfake technology has emerged in alarming ways. Fake videos of political leaders, morphed obscene images of actresses, audio clips designed to incite communal tension, and artificial messages created for financial fraud—all testify that technology is never neutral; it can be used constructively or destructively. When truth is distorted and lies are wrapped in the polished veneer of technological authenticity, an atmosphere of distrust inevitably spreads across society. In such circumstances, government intervention appears necessary, for this is not merely a question of free expression, but one of social harmony, national security, and the dignity of citizens.

Under the new rules, the responsibility of social media platforms has been significantly expanded. They must now ensure that users are informed whether the content being shared has been generated by AI. This establishes a minimum standard of transparency. The three-hour compliance window further signals that the government recognizes the gravity of digital offenses. Once a deepfake video goes viral, subsequent clarifications often prove ineffective; only swift action can limit the damage. However, it is equally true that verifying the authenticity of content within such a short timeframe is technically and administratively complex. Platforms will have to substantially strengthen their monitoring mechanisms, posing operational and financial challenges.

An equally critical question concerns who defines “objectionable” or “misleading” content—and on what basis. In a democracy, freedom of expression is a fundamental right. If the regulatory process lacks transparency and fairness, it risks misuse. In the past, attempts to regulate social media have been portrayed by some as governmental overreach. Therefore, striking a balance between regulation and liberty is essential. The government must ensure that these rules are not used to suppress dissent but are applied strictly to curb misinformation and criminal misuse. Independent oversight mechanisms, judicial review, and transparent grievance redressal systems could serve as essential safeguards in this regard.

The global landscape reflects similar concerns. Countries such as Australia and France have introduced measures including minimum age requirements for social media use by children. In the United States, historic lawsuits are underway to examine the mental health impacts of platforms like Instagram and YouTube. Major technology companies have faced allegations of designing systems that foster addiction among young users. Experts note that some adolescents experience not only psychological but even physical discomfort when separated from their phones. This crisis is not confined to developed nations; in India and other developing countries, the unchecked expansion of social media is also contributing to social and psychological distress.

The AI industry itself is navigating a moral dilemma. In an intensely competitive market, companies are racing to achieve technological superiority, often leaving safety and ethical concerns behind. Recently, the resignation of a security researcher from a major company—citing the unchecked acceleration of controversial projects—highlighted these tensions. As technological progress accelerates, regulatory frameworks risk becoming obsolete unless they are continually updated. Regulation must therefore be not only punitive but also forward-looking and participatory.

In the Indian context, the issue is particularly sensitive. The country’s digital revolution has witnessed unprecedented growth, with millions of new users coming online each year. If the distinction between authentic and fabricated content becomes blurred, democratic discourse itself may be undermined. Electoral processes, social harmony, financial transactions, and personal reputations could all be threatened by deepfakes. Thus, mandatory labeling of AI-generated content is not a mere procedural formality but a significant step toward strengthening information literacy. However, labeling will only be effective if citizens themselves are digitally literate; otherwise, content may continue to be shared uncritically.

In light of upcoming e-summits and the "India AI Mission," it becomes even more pertinent that technological advancement be aligned with the values of transparency, integrity, and authenticity. If India aspires to global leadership in AI, it must also lead in establishing ethical standards. Merely increasing the number of startups and innovations is insufficient; it is equally important to ensure that AI respects human dignity, privacy, and democratic institutions. The objective of regulation is not to stifle technological progress, but to make it accountable.

Ultimately, the deepfake and AI challenge is not only technological—it is moral and social. Law is necessary, but it is not sufficient. A lasting solution will require coordinated responsibility from digital companies, transparency from governments, vigilance from the judiciary, and awareness among citizens. Balanced and impartial regulation will not suppress freedom of expression; rather, it will safeguard it—because freedom is meaningful only when anchored in truth and responsibility. As technology’s ability to transform lies into convincing realities grows stronger, so too must our collective resolve to defend truth. This is the greatest ethical test of the digital age—and the next defining benchmark for democratic societies.

