Wikipedia’s Existential Threats Feel Greater Than Ever


In 2010, the FBI sent Wikipedia a letter that would intimidate any organization.

The missive demanded that the free online encyclopedia remove the FBI logo from an entry about the agency, saying reproduction of the emblem was illegal and punishable by fines, imprisonment, “or both.” Rather than back down, a lawyer for the Wikimedia Foundation, which hosts Wikipedia, responded with a flat denial, pointing out how the FBI’s interpretation of the relevant law was incorrect and asserting that Wikipedia was “prepared to make its case in court.” It worked: the FBI dropped the case.

But that dispute presupposed a society based on the rule of law, in which a government agency would hear a legal argument in good faith rather than crush it with raw power. Fast forward to the present day, and things are very different. Elon Musk has dubbed the site “Wokepedia” and claimed it is controlled by far-left activists. Last fall, Tucker Carlson devoted an entire 90-minute podcast to denouncing Wikipedia as “completely dishonest and completely controlled on important issues.” And after Republican Representatives James Comer and Nancy Mace accused Wikipedia of “manipulation of information” as part of a congressional investigation, the foundation responded with a respectful explanation of how Wikipedia works, taking a conciliatory approach rather than contesting government overreach. The pragmatic shift reflects a world in which the Trump administration picks winners and losers based on their policy preferences.

As the world’s most famous free Internet encyclopedia celebrates its 25th anniversary today, it faces many challenges. Right-wing political forces have attacked Wikipedia for its alleged liberal bias, with the conservative Heritage Foundation going so far as to say it will “identify and target” the site’s volunteer editors. AI bots continually scrape information from Wikipedia, putting a strain on the site’s servers. Added to these problems is the struggle to replenish the project’s volunteer community as its editor base ages.

Behind these threats is a worrying sense that the culture has moved away from Wikipedia’s founding ideals. Aiming for neutrality, evaluating sources, volunteering for the public good, supporting a non-commercial online project—these concepts seem at best old-fashioned and at worst unnecessary in today’s openly partisan, anarchic, anti-human phase of the Internet, where “greed is good.”

There remains the possibility, however, that Wikipedia’s most influential days lie ahead, provided it can reinvent itself amid the upheaval.

Bernadette Meehan, the new CEO of the Wikimedia Foundation, whose resume includes positions as a foreign service officer and ambassador, is well-positioned to deal with these attacks, according to communications director Anusha Alikhan. “I think diplomacy and negotiation skills are things that will adapt well to today’s environment,” she told WIRED. But even the best diplomat would struggle to meet today’s challenges: the UK has proposed age-restricting Wikipedia as part of its Online Safety Act. In Saudi Arabia, Wikipedia editors were jailed after documenting human rights violations in the country on the platform. And the Great Firewall continues to block all versions of the site for mainland China.

Perhaps more telling is that even within the Wikipedia community, long-time contributors are concerned about Wikipedia losing relevance. In a widely circulated essay, veteran editor Christopher Henner said he feared that Wikipedia was increasingly becoming a “temple” filled with aging volunteers, content with work that no one looks at anymore.

Beyond these ongoing censorship battles, Wikipedia also struggles to explain why human labor still matters in the age of artificial intelligence. Although almost all major AI systems train on Wikipedia’s openly licensed content, the message from the tech industry since 2022 has been that human production of knowledge has been rendered irrelevant by AI. Except that’s not true. Although we’re still in the early days of the AI revolution, so far it appears that AI applications work best when trained on information written and verified by humans—the kind that comes from human-centered editorial processes like Wikipedia’s. When an AI system recursively trains on its own AI-generated synthetic data, it is likely to suffer from model collapse.
