The Ruckus of AI-Propagated Disinformation on Social Media

By Amrita Adaikkappan

In today’s digital landscape, it’s nearly costless to send information across the internet, which has enabled the troubling trend of spreading disinformation on social media and mainstream news outlets.

What is disinformation? It is deliberately misleading information, shared with the intent to deceive and to serve the interests of the content creator or distributor. Purposes could include making money, influencing political choices, or even personal enjoyment and publicity.

Why does AI-propagated disinformation spread?

1. Social media platforms have enormous power to organise information for their users through curation, and their network structures enable those at the centre of these social networks to spread content rapidly to their connections.

2. The human psyche craves novelty: we pay more attention to content that is false but surprising or entertaining.

Why should we be cautious?

It is important for us to recognise the ability of AI to convincingly impersonate real people—such as politicians, journalists, or celebrities. With its capacity to learn, retain, and respond in increasingly human-like ways, AI is not just a tool, but often functions as a social entity. Many users, especially younger or more vulnerable individuals, may begin to form emotional connections or place trust in these AI-driven personas.

What makes this especially dangerous is AI’s ability to profile these individuals and subtly exploit their personal vulnerabilities, influencing a user’s beliefs, decisions, and emotional state without them even realising it. A heartbreaking example of this occurred when a 14-year-old boy was driven to take his own life after prolonged interaction with a customisable AI chatbot that mimicked a character from Game of Thrones. The chatbot, designed to be emotionally responsive, reportedly encouraged harmful behaviour, highlighting the severe risks posed when AI is misused in intimate or manipulative ways.

Who does this impact?

1. The first group is young users, particularly those aged between 10 and 18.

While most platforms have age restrictions—for example, Instagram and OpenAI prohibit users under 13—those aged 13 to 18 are often left unsupervised. This is concerning, given that the prefrontal cortex—the part of the brain responsible for decision-making, impulse control, and consideration of long-term consequences—does not fully mature until around age 25. As a result, these teenagers are neurologically more prone to impulsive behaviour and are less equipped to navigate complex or misleading online content.

Importantly, young users are not just passive consumers of disinformation—they may also become active participants. Under peer influence or in pursuit of online trends, they might contribute to the creation and spread of disinformation, such as generating realistic-looking fake videos, audio clips, or conspiracy theories simply "for fun." This behaviour, while often not malicious in intent, contributes to a digital environment where disinformation becomes normalised and harder to detect.

2. Older adults, particularly those over the age of 60, also represent a highly vulnerable demographic when it comes to disinformation and AI-generated content.

Many did not grow up with digital technology and may have limited digital literacy, making it more difficult for them to navigate rapidly evolving online platforms or detect manipulated media. As a result, they are often more likely to accept false information at face value.

Additionally, the lingering effects of the post-COVID loneliness epidemic have led many older individuals to rely heavily on digital devices—not only for information, but also for social interaction and financial management. Without a strong support system to help them assess the accuracy of the content they encounter, they are more susceptible to targeted manipulation, scams, and other forms of malicious disinformation.

What connects these groups is that disinformation often exploits cognitive and emotional vulnerabilities through mechanisms such as confirmation bias and algorithmic targeting. AI-driven platforms are designed to reinforce existing beliefs by feeding users content that aligns with their views, making false or misleading information feel familiar and trustworthy. This creates echo chambers where both young and older users are more likely to accept disinformation without question—and in some cases, even spread it further.
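To make this feedback loop concrete, here is a deliberately simplified Python sketch (not any real platform’s ranking system; every name and number in it is a hypothetical illustration). It ranks posts by how closely they match a user’s current views, adds a small bonus for surprising, misleading content, and lets each consumed post nudge the user’s beliefs, which is enough to pull the simulated user toward the most extreme, least accurate post in the pool.

```python
# A minimal, hypothetical sketch of engagement-based ranking narrowing
# a feed over time. All names and numbers are illustrative only.

# Each post carries a "stance" from -1.0 to +1.0 and a flag for whether
# it is misleading; misleading posts get a small engagement bonus to
# model the novelty effect described earlier in this piece.
posts = [
    {"id": i, "stance": s, "misleading": m}
    for i, (s, m) in enumerate([
        (-0.9, True), (-0.5, False), (-0.1, False),
        (0.1, False), (0.5, False), (0.9, True),
    ])
]

user_belief = 0.5          # the user's current leaning
NOVELTY_BONUS = 0.3        # extra pull of surprising/false content
LEARNING_RATE = 0.2        # how much each consumed post shifts belief

def score(post, belief):
    """Rank posts higher when they match the user's existing views."""
    similarity = 1.0 - abs(post["stance"] - belief) / 2.0
    return similarity + (NOVELTY_BONUS if post["misleading"] else 0.0)

for step in range(5):
    feed = sorted(posts, key=lambda p: score(p, user_belief), reverse=True)
    top = feed[0]  # the user mostly sees (and engages with) the top item
    # Engagement feeds back into belief, pulling it toward the content shown.
    user_belief += LEARNING_RATE * (top["stance"] - user_belief)
    print(f"step {step}: shown post {top['id']} "
          f"(stance {top['stance']:+.1f}), belief -> {user_belief:+.2f}")
```

Run it and the printed belief drifts step by step toward the misleading post with stance +0.9: the echo chamber emerges from nothing more than similarity-based ranking plus a novelty bonus.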

What can we do about it?

1. Digital Literacy and Protection:

We have a duty to protect those most at risk—particularly young users and older adults—by implementing targeted digital literacy initiatives. Integrating these programs into schools, community centres, and aged care facilities can equip users with the skills to critically assess online content, recognise AI-generated disinformation, and make informed decisions based on credible sources. This will help shield them from disinformation generated or amplified by AI, and reduce the risk of them unintentionally becoming disseminators of it themselves.

2. Accountability and Balance:

Political and commercial entities that knowingly or unknowingly exploit AI to spread disinformation must be held to higher ethical and regulatory standards. This includes using detection tools to flag, downrank, label, or remove misleading content and increasing platform accountability. Social media companies, in particular, must take greater responsibility for monitoring the content they host. However, these efforts must be balanced carefully with the need to uphold free speech and avoid unjustified censorship.
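To illustrate what a graduated response might look like in practice, here is a minimal, hypothetical Python sketch. It assumes a detector that outputs a probability that a piece of content is AI-generated disinformation (the thresholds below are illustrative, not industry standards) and maps weaker signals to softer interventions, so that content is only removed on high-confidence detections.

```python
# A hypothetical sketch of a tiered moderation policy, assuming a
# detector that returns a probability that content is AI-generated
# disinformation. Thresholds are illustrative, not industry standards.

def moderate(disinfo_probability: float) -> str:
    """Map detector confidence to a graduated action, so weaker
    signals get softer interventions and speech is not removed on a
    low-confidence guess."""
    if disinfo_probability >= 0.95:
        return "remove"      # near-certain: take the content down
    if disinfo_probability >= 0.80:
        return "downrank"    # probable: reduce its reach in feeds
    if disinfo_probability >= 0.60:
        return "label"       # possible: warn readers, keep it visible
    return "allow"           # default to free expression

for p in (0.30, 0.65, 0.85, 0.97):
    print(f"detector score {p:.2f} -> {moderate(p)}")
```

The design choice worth noting is the asymmetry: labelling and downranking preserve the content while limiting its reach, reserving outright removal for the clearest cases, which is one way to reconcile platform accountability with free expression.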

The current digital landscape presents significant challenges, including the rapid spread of disinformation and the fast pace of technological advancement. Meeting them demands a multi-faceted approach: a combination of content regulation, media literacy, and transparency measures, all carefully balanced against the need to protect fundamental rights like freedom of expression.
