Clickbait clutter: AI is generating too much digital junk

AI-generated pieces are frequently optimized for engagement rather than accuracy or depth.

Summary

  • AI-generated content is suffusing the internet with stuff it can do without. This is fast lowering the overall reliability of what one encounters. While GenAI has many uses, it’s also a menace in need of regulation.

The internet is already a dumping ground for all sorts of views and statements with dubious claims to veracity. We bemoan echo chambers that feed people misinformation and host content that’s not just blatantly false, but also likely to whip up a frenzy. Power mongers understand how ‘mob mentality’ works online and use it to their advantage. Former US president Donald Trump’s incessant tweeting, right up to the attack on the Capitol complex in January 2021, is a shining example of how warped the digital world had become even before the rise of generative artificial intelligence (GenAI).

Before GenAI, internet actors would use bot accounts to spew out questionable content. The phenomenon has only worsened since GenAI burst upon the scene: we now contend with an AI-spewed deluge, and how content is created and consumed is undergoing a shift.

We marvel at the capability of new technologies to generate videos, write-ups and more at an unprecedented pace, but there seems to be little concern about the quality and utility of such content. The digital sphere, it seems, is increasingly littered with digital junk—clickbait videos and other posts that distract rather than inform or entertain us.

GenAI tools can create video, text and images from simple prompts. The idea is to ‘democratize’ content creation (this word in the context of AI produces a visceral reaction in me, but that can wait for a future column). You no longer need to be a skilled videographer, writer or artist to produce content; AI does the heavy lifting. 

This has led to an explosion in content, often accompanied by a drastic drop in quality. The internet is fast becoming cluttered with what can only be described as useless content: videos that promise much but deliver little, articles that ramble on without offering insights, and images that catch the eye but not one’s imagination.

The clickbait temptation is not new. Since the early days of print media, headlines have been crafted to attract readers, often at the expense of substance. GenAI has turbocharged this trend, enabling clickbait tactics to be used at scale. Such AI-generated pieces are frequently optimized for engagement rather than accuracy or depth. They leverage algorithms that prioritize clicks and views over quality. The result is a flood of content designed not to enlighten or inform, but to exploit weaknesses in human psychology as much as the algorithms of search engines and social-media platforms.

Take the realm of video content. Platforms like YouTube, TikTok and others are inundated with AI-generated videos that often start with provocative titles or thumbnail images. Once clicked, these videos frequently fail to deliver on their promise, offering superficial information stretched across several minutes to maximize engagement time and ad revenue. This wastes the viewer’s time and makes it harder to find meaningful content amid all the noise on the internet.

Speaking of titillation, OpenAI now says it’s searching for ways to ‘responsibly’ generate AI pornography (bit.ly/44DF5bO). Wired reports that it is unclear whether OpenAI’s exploration of how to make responsible ‘NSFW’ content (‘not safe for work,’ a term for material inappropriate in professional settings) envisages loosening its usage policy only to permit the generation of erotic content or, more broadly, to allow depictions of violence as well.

The impact on consumer trust is palpable. As even discerning users are forced to wade through irrelevant and misleading content, trust in digital platforms begins to erode. This, in turn, can have broader implications for content creators and advertisers, who find it increasingly hard to engage sceptical and overwhelmed audiences.

Moreover, the rise of GenAI in content creation raises ethical questions. Since AI tools lack the moral and ethical judgement of human creators, they can aid the spread of falsehoods, especially when guided by users with dubious intentions. With all this now possible at scale, we need to ask ourselves: how much worse can it get?

On the flip side, the same technology that fuels the proliferation of clickbait also holds the potential to enhance the quality of content. For instance, AI can help human creators produce more accurate and engaging content by assisting with research, generating creative ideas and providing real-time feedback. We need to steer towards these use cases instead of churning out mindless videos.

To combat bad content, platform operators and developers should prioritize the development of algorithms that reward substance over sensationalism. This includes shifting content-promotion signals away from raw clicks towards engagement metrics that reflect viewer retention and interaction quality, as sketched below. Encouraging and promoting content verified for accuracy and depth could also help elevate the standard of information on the internet. Even explicit material, for whatever it’s worth, could be improved.
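As a rough illustration of what such a re-weighting might look like, here is a minimal Python sketch of a promotion score that discounts raw click-through and rewards retention and interaction quality. The signal names and weights are hypothetical, not any platform’s actual ranking formula.

# Hypothetical content-scoring sketch: favours retention and interaction
# quality over raw clicks. Signal names and weights are illustrative only.

def content_score(clicks: int, impressions: int,
                  avg_watch_fraction: float,
                  meaningful_interactions: int) -> float:
    """Return a promotion score that rewards substance over sensationalism."""
    if impressions == 0:
        return 0.0
    click_rate = clicks / impressions                  # easy to inflate with bait titles
    retention = avg_watch_fraction                     # 0.0-1.0: share of the video actually watched
    interaction_quality = meaningful_interactions / max(clicks, 1)  # e.g. saves, shares, substantive comments
    # Down-weight the click signal; up-weight retention and quality.
    return 0.1 * click_rate + 0.6 * retention + 0.3 * interaction_quality

# A clickbait video: many clicks, poor retention.
print(content_score(clicks=10_000, impressions=50_000,
                    avg_watch_fraction=0.15, meaningful_interactions=200))   # ~0.12
# A substantive video: fewer clicks, strong retention.
print(content_score(clicks=2_000, impressions=50_000,
                    avg_watch_fraction=0.85, meaningful_interactions=600))   # ~0.60

Under this toy weighting, the clickbait video scores far lower despite attracting five times as many clicks.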

Regulatory frameworks play a critical role. As GenAI evolves, so must our laws and regulations to ensure that content creation tools are used responsibly. Measures are needed to stop the spread of misinformation and protect copyrights so that creators get credit and compensation for their work.

While GenAI has the potential to revolutionize content creation, its current trajectory points towards a future cluttered with digital junk. We need government intervention; it is clear to me that Big Tech cannot be relied upon to regulate itself adequately.
