Scientists Must Push AI Toward Responsible Development


For many researchers, it is increasingly difficult to be optimistic about the impacts of artificial intelligence.

As authoritarianism rises around the world, AI-generated “slop” is overwhelming legitimate media outlets, while AI-generated deepfakes spread misinformation and amplify extremist messages. AI makes warfare more precise and deadly amid intractable conflicts. AI companies exploit people in the Global South who work as data labelers and take advantage of content creators around the world by using their work without licensing or compensation. The industry also strains an already turbulent climate with its enormous energy needs.

Meanwhile, particularly in the United States, public investment in science appears to be declining, with what remains increasingly redirected toward AI to the detriment of other disciplines. And big tech companies are consolidating their control over the AI ecosystem. In these and other ways, AI appears to be making the situation worse.

That’s not the whole story. We must not resign ourselves to AI being harmful to humanity. None of us should accept this as inevitable, especially those in a position to influence science, government, and society. Scientists and engineers can push AI towards a beneficial path. Here’s how.

Academia’s view of AI

A Pew Research Center study in April revealed that 56% of AI experts (authors and presenters at AI conferences) predict that AI will have a positive effect on society. But this optimism does not extend to the scientific community as a whole. A 2023 survey of 232 scientists by the Center for Environmental Science, Technology, and Policy Studies at Arizona State University found more concern than enthusiasm about the use of generative AI in everyday life, by a ratio of nearly three to one.

We have encountered this sentiment many times. Our diverse careers in applied work have brought us into contact with numerous research communities: privacy, cybersecurity, physical sciences, drug discovery, public health, public interest technology, and democratic innovation. Across all of these areas, we have seen strong negative sentiment about the impacts of AI. The feeling is so palpable that we have often been asked to represent the voice of AI optimists, even though we spend most of our time writing about the need to reform AI development structures.

We understand why these audiences see AI as a destructive force, but this negativity breeds a different concern: that those with the potential to guide AI’s development and rein in its influence on society will view it as a lost cause and withdraw from the process.

Elements of a positive vision for AI

Many have argued that turning the tide on climate action requires a clear path toward positive outcomes. Likewise, while scientists and technologists must anticipate, warn of, and help mitigate the potential harms of AI, they must also highlight the ways the technology can be put to good use, galvanizing public action to these ends.

There are myriad ways to harness and reshape AI to improve people’s lives, distribute rather than concentrate power, and even strengthen democratic processes. Many examples come from the scientific community and deserve to be celebrated.

Some examples: AI is breaking down communication barriers between languages, including in under-resourced contexts like marginalized sign languages and indigenous African languages. It helps policymakers integrate the views of many constituents through AI-assisted deliberations and legislative engagement. Large language models can tailor individual dialogues to address climate change skepticism, delivering accurate information at a critical time. National laboratories are building foundation AI models to accelerate scientific research. And across medicine and biology, machine learning is solving scientific problems such as predicting protein structure to aid drug discovery, work that was recognized with a Nobel Prize in 2024.

Although each of these applications is nascent and surely imperfect, they all demonstrate that AI can be used to promote the public interest. Scientists should embrace, advocate, and expand such efforts.

A call to action for scientists

In our new book, Rewiring Democracy: How AI Will Transform Our Politics, Our Government, and Our Citizenship, we outline four key actions for policymakers committed to directing AI toward the public good.

These also apply to scientists. First, researchers should work to reform the AI industry to be more ethical, fair, and trustworthy. We must collectively develop ethical standards for research that advances and applies AI, and should support and draw attention to AI developers who adhere to those standards.

Second, we should resist harmful uses of AI by documenting and publicizing its negative and inappropriate applications.

Third, we should responsibly use AI to improve society and the lives of citizens, harnessing its capabilities to help the communities we serve.

And finally, we must advocate for renovating institutions to prepare them for the impacts of AI; universities, professional societies, and democratic organizations are all vulnerable to disruption.

Scientists have a special privilege and responsibility: we are close to the technology itself and therefore well placed to influence its trajectory. We must work to create an AI-infused world we want to live in. Technology, as the historian Melvin Kranzberg observed, “is neither good nor bad; nor is it neutral.” Whether the AI we build is harmful or beneficial to society depends on the choices we make today. But we cannot create a positive future without a vision of what it will look like.
