How Wikipedia is fighting AI slop content

With the rise of AI writing tools, Wikipedia's editors have had to contend with a flood of AI-generated content riddled with false information and phony citations. The community of Wikipedia volunteers has already mobilized to fight back against AI slop, in what Wikimedia Foundation product director Marshall Miller compares to a kind of "immune system" response.

"They're vigilant about making sure that the content stays neutral and reliable," Miller explains. "As the internet changes, as AI appears, that's the immune system adapting to a new kind of challenge and figuring out how to handle it."

One way Wikipedians are cutting through the sludge is with the "speedy deletion" of badly written articles, as first reported by 404 Media. One Wikipedia reviewer who voiced support for the rule said they are "flooded with horrendous drafts." They added that speedy deletion "would greatly help efforts to fight it and save countless hours cleaning up the mess left behind by unwanted AI." Another said that the "lies and fake references" inside AI outputs take "an incredible amount of experienced editors' time to clean up."

As a general rule, articles flagged for removal on Wikipedia enter a seven-day discussion period during which community members determine whether the site should delete the article. The newly adopted rule allows Wikipedia administrators to bypass these discussions if an article is clearly AI-generated and wasn't reviewed by the person who submitted it. That means looking for three main signs:

  • Writing directed at the user, such as "Here is your Wikipedia article on…" or "I hope this helps!"
  • "Nonsensical" citations, including those with incorrect references to authors or publications.
  • Nonexistent references, such as dead links, ISBNs with invalid checksums, or unresolvable DOIs.
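The ISBN checksum test in that last sign is a purely mechanical check. As a minimal sketch (the function name and structure are my own illustration, not any actual Wikipedia tooling), ISBN-13 validity can be verified by summing the digits with alternating weights of 1 and 3 and checking divisibility by 10:

```python
def isbn13_is_valid(isbn: str) -> bool:
    """Return True if the string contains a well-formed ISBN-13
    with a correct check digit."""
    digits = [c for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    # ISBN-13 checksum: weight digits alternately by 1 and 3;
    # the total must be divisible by 10.
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(digits))
    return total % 10 == 0


# A fabricated or mistyped ISBN usually fails this test, which is why
# an invalid checksum is a strong tell for hallucinated references.
print(isbn13_is_valid("978-0-13-110362-7"))  # a real ISBN-13
print(isbn13_is_valid("978-0-13-110362-8"))  # last digit altered
```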

These aren't the only signs Wikipedians look for, however. As part of WikiProject AI Cleanup, which aims to tackle a "growing problem of unsourced, poorly written AI-generated content," editors have compiled a list of phrases and formatting quirks that chatbot-written articles typically exhibit.

The list goes beyond calling out the overuse of em dashes ("—"), which have become associated with AI chatbots; it also covers overuse of certain conjunctions, such as "in addition," as well as promotional language, like describing something as "breathtaking." The page also advises Wikipedians to watch for other formatting issues, such as curly quotation marks and apostrophes instead of straight ones.
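These tells lend themselves to simple heuristic screening. The sketch below is illustrative only: the patterns are examples drawn from the signals described in this article, not the actual (much longer, editor-curated) WikiProject AI Cleanup list, and the thresholds are arbitrary assumptions.

```python
import re

# Illustrative chatbot tells based on the signals described above;
# the real editor-maintained list is far more extensive.
CHATBOT_TELLS = [
    r"here is your wikipedia article",
    r"i hope (?:this|it) helps",
    r"\bin addition\b",  # conjunction overuse flagged by editors
]

def flag_ai_tells(text: str) -> list[str]:
    """Return heuristic flags suggesting a draft may be chatbot-written."""
    flags = [p for p in CHATBOT_TELLS
             if re.search(p, text, re.IGNORECASE)]
    if text.count("\u2014") >= 3:  # heavy em-dash use (threshold is arbitrary)
        flags.append("em-dash overuse")
    if "\u201c" in text or "\u2019" in text:  # curly quotes instead of straight
        flags.append("curly quotation marks")
    return flags
```

As the policy itself cautions, matches like these should never be the sole basis for deletion; a tool like this could at most queue drafts for human review.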

However, Wikipedia's speedy deletion page notes that these characteristics "should not, on their own, serve as the sole basis" for determining that something was written by AI, making it subject to deletion. The speedy deletion policy isn't just for AI-generated slop, either. The online encyclopedia also allows the speedy deletion of pages that disparage their subject, contain hoaxes or vandalism, or consist of "incoherent text or gibberish," among other things.

The Wikimedia Foundation, which hosts the encyclopedia but doesn't set policy for the website, hasn't always seen eye to eye with its community of volunteers on AI. In June, the foundation paused an experiment that placed AI-generated summaries at the top of articles after facing community backlash.

Despite varying viewpoints on AI within the Wikipedia community, the Wikimedia Foundation isn't against using it, as long as it leads to accurate, high-quality writing.

"It's a double-edged sword," Miller says. "It means people can generate lower-quality content at higher volumes, but AI can also be a tool to help volunteers do their work, if we do it right and work with them to figure out the right ways to apply it." For example, the Wikimedia Foundation already uses AI to help identify article revisions containing vandalism, and its recently published AI strategy includes supporting editors with AI tools that help them automate "repetitive tasks" and translation.

The Wikimedia Foundation is also actively developing a tool called Edit Check, which aims to help new contributors comply with its policies and writing guidelines. Eventually, it could help ease the burden of unreviewed AI-generated submissions. Right now, Edit Check can remind writers to add citations if they've written a large amount of text without one, and it can check their tone to ensure writers stay neutral.

The Wikimedia Foundation is also working to add a "Paste Check" to the tool, which will ask users who have pasted a large chunk of text into an article whether they actually wrote it. Contributors have submitted several ideas to help the foundation build on the tool, with one user suggesting it ask suspected AI-assisted authors to specify how much of their submission was generated by a chatbot.

"We're following our communities in what they do and what they find productive," Miller explains. "For now, our focus in using machine learning in the editing context is more on helping people make constructive edits, and on helping the people who patrol edits pay attention to the right ones."
