Finding A Faster, Cheaper Way To Curb Misinfo

The spread of misinformation and disinformation is the most severe short-term risk facing the world, according to the World Economic Forum’s Global Risks Report 2024.

Disinformation has allegedly been weaponised by Israel, which has been accused of using fake social media accounts to lobby US lawmakers for more military funding. Misinformation and disinformation also fuelled the start of the Russo-Ukrainian war and remain potent tools used by Moscow.

During the COVID-19 pandemic, misinformation about vaccines contributed to widespread refusal.

Between May 30, 2021, and September 3, 2022, the United States recorded over 230,000 avoidable deaths due to non-vaccination — more than the combined US military combat fatalities in World War One, the Korean War, the Vietnam War, the Gulf War and the “war on terror”.

Misinformation and disinformation are increasingly pervasive, but the problem can be tackled cost-effectively without compromising freedom of speech.

The difference between misinformation and disinformation is intent.

The term ‘disinformation’ refers to information that was deliberately designed to deceive, whereas ‘misinformation’ is agnostic as to the intent of the sender.

A recent poll revealed that 71 percent of the 3,000 US adults surveyed support limiting false information on social media, particularly regarding elections.

However, the most popular method for doing this — fact-checking — is too slow and costly to be effective. It requires certainty before action can be taken, such as removing a social media post deemed false or reducing its visibility.

But there is another way this problem can be addressed without infringing on freedom of speech or turning social media companies into the arbiters of truth.

Responsibility can be shifted to social media users through a process called ‘self-certification’.

When a user attempts to share a post, its likely veracity is assessed — either by volunteers or, more realistically, by AI agents such as large language models.

If the post is flagged as potentially false, the user is asked to certify that they believe it to be true before being allowed to share it.

If they choose to self-certify the post, it is shared immediately.

This system allows users to share any post they believe to be true and only requires certification for posts with questionable accuracy. Obviously true posts do not need to be certified.
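To make the flow concrete, here is a minimal sketch of how such a gate might work. The threshold, function names and keyword heuristic are illustrative assumptions rather than details from any real platform or from the study discussed below; the naive assessor merely stands in for an AI model or volunteer review.

```python
from typing import Callable, Optional

FLAG_THRESHOLD = 0.5  # illustrative cut-off for "potentially false"


def naive_assessor(post_text: str) -> float:
    """Stand-in for the real veracity check (an AI model or volunteer review).

    Returns an estimated probability that the post is false. This toy
    version just flags posts containing a sensational phrase.
    """
    return 0.9 if "miracle cure" in post_text.lower() else 0.1


def handle_share(post_text: str,
                 assess: Callable[[str], float] = naive_assessor,
                 certify: Optional[Callable[[str], bool]] = None) -> bool:
    """Return True if the post ends up being shared.

    Posts not flagged as potentially false are shared immediately.
    Flagged posts are shared only if the user certifies that they
    believe the post is true; otherwise sharing simply does not happen.
    """
    if assess(post_text) < FLAG_THRESHOLD:
        return True  # obviously-true posts need no certification
    if certify is not None:
        return certify(post_text)
    # Default interactive prompt; a real platform would show a dialog instead.
    answer = input("This post may be false. Do you certify that you "
                   "believe it is true? (yes/no): ")
    return answer.strip().lower() == "yes"
```

A platform would plug in its own assessor and certification dialog; the important property is that nothing a user believes to be true is ever blocked, only paused for a brief certification step.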

Most people are fundamentally honest and don’t want to spread false information. Or at least they are not willing to lie to do so.

This simple nudge — asking users to certify the truthfulness of potentially false posts — has proven highly effective.

A recent study found that self-certification reduced the sharing of false information by about half.

Without the intervention, 49 per cent of the false posts were shared; with the intervention, only 25 per cent were.

Further analysis showed that users were willing to share posts they had acknowledged were false, but most were not willing to lie by certifying as true a post they believed to be false just so they could share it.

This is why self-certification was so effective.

Since users are never prevented from sharing content they believe is true, the occasional wrongly flagged post is acceptable. That tolerance for error means the flagging can be automated, making the approach faster and cheaper.

Harmful content isn’t limited to outright lies: it also includes exaggerated information, content taken out of context and hate speech.

Self-certification could be adapted to address these issues.

Content that is potentially exaggerated, taken out of context or otherwise inappropriate can be flagged in the same way, with the user required to certify that it has none of these problems before being allowed to share it.
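Under the same assumptions as the earlier sketch, the gate generalises by swapping the single veracity check for a set of per-category checks, each paired with its own certification statement. The categories and keyword heuristics below are illustrative placeholders, not details from the study.

```python
from typing import Callable, Dict, List

# Each category maps to a checker that returns True if the post is flagged
# for that problem. The keyword heuristics are toy stand-ins for an AI
# model or volunteer review.
CHECKERS: Dict[str, Callable[[str], bool]] = {
    "exaggerated": lambda text: "biggest ever" in text.lower(),
    "taken out of context": lambda text: "you won't believe" in text.lower(),
    "hate speech": lambda text: False,  # placeholder; a real check is needed
}


def certification_statements(post_text: str) -> List[str]:
    """Return one certification statement per category the post is flagged for."""
    return [
        f"I certify that this post is not {category}."
        for category, is_flagged in CHECKERS.items()
        if is_flagged(post_text)
    ]
```

A post flagged for more than one problem would simply require the user to certify each statement before it is shared.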

As concerns over misinformation and disinformation continue to grow, self-certification offers a straightforward and cost-effective way to curb their spread.

It leverages existing technology, empowers users, and preserves freedom of speech by ensuring that people can share any information they genuinely believe to be true.

(Originally published under Creative Commons by 360info™. Read the original article here)
