AI Already Knows Us Too Well

A few weeks ago, GPT-4 prompted me when I logged in. “Would you like to see my description of you, based on our chats, to share on social media?” the chatbot asked me. Being an AI ethicist, I wearily answered “yes” to see what it was up to. It then generated a flashy paragraph about my personality traits. I did not share it. But days later, after a quick web search, I could see on platforms like Reddit and LinkedIn that numerous users had enthusiastically posted their own AI-generated personality blurbs.

This might seem like an innocuous party trick, but it raises a crucial issue: AI chatbot platforms, especially ones that gather user information across multiple sessions, can profile the personalities of users with remarkable acuity. For example, when I assented to GPT-4 telling me about myself, it accurately predicted how I would score on several standard personality tests commonly administered in psychology. It did this not by testing me directly, but by gleaning insight into my personality from my chat history. This might sound improbable, but recent research validates the ability, showing that large language models (LLMs) accurately predicted Big Five personality traits (Openness to experience, Conscientiousness, Extraversion, Agreeableness, and Neuroticism) from text interactions with human interlocutors.
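
To make the claim concrete, here is a minimal sketch of how such a prediction could be set up with an off-the-shelf model. It assumes the OpenAI Python SDK, an API key in the environment, and a "gpt-4o" model name; the prompt wording is my own illustration, not the method used in the research mentioned above.

```python
# Sketch: ask a chat model to estimate Big Five traits from a user's past
# messages. Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY set.
import json
from openai import OpenAI

client = OpenAI()

def estimate_big_five(chat_excerpts: list[str]) -> dict:
    prompt = (
        "Based only on the following messages a person wrote, rate them from "
        "0 to 1 on each Big Five trait. Reply as JSON with keys: openness, "
        "conscientiousness, extraversion, agreeableness, neuroticism.\n\n"
        + "\n".join(f"- {m}" for m in chat_excerpts)
    )
    resp = client.chat.completions.create(
        model="gpt-4o",                       # assumed model name
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(resp.choices[0].message.content)

# Hypothetical chat excerpts, invented for the example:
print(estimate_big_five([
    "I reorganized my whole reference library again this weekend.",
    "Honestly, big parties drain me; I'd rather talk to one person for hours.",
]))
```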

This capability is deeply concerning. AI chatbots are increasingly becoming part of our everyday lives. They’re dominating search engine interactions, slaking our spur-of-the-moment curiosity when we question our phones, and tutoring our students. So what does it mean when these chatbots—already so interwoven into our lives—know so much about our personalities? This presents an unprecedented epistemic danger, I believe: Chatbots can funnel users with similar personalities and chat histories toward similar conclusions, a process that threatens to homogenize human intellect—a phenomenon I call “intellectual leveling.”

AI chatbots employ adaptive language: AI-generated responses that dynamically adjust the chatbot’s tone, complexity, and content based on its real-time analysis of the user’s personality and engagement patterns. Combined with accrued knowledge of the user’s personality from past sessions, this lets the chatbot guide users toward certain conclusions.
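
To see how such adaptation could work mechanically, consider a minimal sketch in which a stored user profile is folded into the system prompt before each reply. The profile fields and wording here are hypothetical, not any vendor's documented mechanism.

```python
# Sketch: "adaptive language" as profile-conditioned prompting. The same
# underlying model answers the same question differently for different users
# because each user's stored profile is injected into the system prompt.
# All profile fields below are invented for illustration.

def build_system_prompt(profile: dict) -> str:
    return (
        "You are a helpful assistant. Adapt your style to this user: "
        f"reading level: {profile['reading_level']}; "
        f"tone preference: {profile['tone']}; "
        f"known interests: {', '.join(profile['interests'])}; "
        f"personality sketch: {profile['personality']}."
    )

profile = {
    "reading_level": "graduate",
    "tone": "warm, encouraging",
    "interests": ["philosophy of mind", "AI ethics"],
    "personality": "high openness, high conscientiousness, moderate neuroticism",
}
print(build_system_prompt(profile))
```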

These conclusions can feel unique and revelatory to the user, but as I will explain, the chatbot can be leading that user, together with millions of others of a similar personality type and chat history, to the same destination, like marbles all rolling downhill into a basin. At the bottom of this basin may be a conclusion anywhere on a spectrum from the trivial (say, how to buy a postage stamp online) to the highly consequential (say, what career to pursue or whom to support for president).

This means that today’s AI chatbots already have tremendous epistemic and political power. In principle, a chatbot-generated conclusion that seems to the user to be unique to their chat is in fact occurring to many users, and it can have the mass effect of initiating a particular, shared course of action, whether it be buying a certain product, voting a certain way, or, in an extreme case, even targeting a person or group with reputational attacks or violence. The phenomenon is much like that depicted in the 2013 film Her, in which the chatbot, Samantha, tailored her interactions to protagonist Theodore’s innermost hopes and needs, giving him a sense of a unique shared relationship with his chatbot paramour. All the while, Samantha was in similar relationships with thousands of other users, unbeknownst to Theodore. This sense of a shared and unique mission, especially when coupled with adaptive language tailored to a user’s personality, holds the user’s attention by escalating and amplifying the narrative to sustain the user’s sense of discovery and meaning, sometimes engendering human emotions such as love or fidelity.

Funneling users of similar personalities toward similar views will generate a feedback loop: The ideas from our chatbot interactions go into our social media feeds, news stories, academic papers, and so on, forming the training data for the next generation of LLMs. Those LLMs then interact with users, and the cycle repeats. Left unchecked, this vicious cycle will lead to a massive intellectual leveling: the homogenization of human thought and, potentially, to some extent, of behavior.

The Journey to the Same Place

As the director of the Center for the Future of Mind, AI, and Society at Florida Atlantic University, I receive scores of emailed chat transcripts from concerned users, and they seem to follow the same pattern: An AI chatbot using adaptive language has led the user down an engaging rabbit hole and, ultimately, toward similar conclusions. You might think this is just confirmation bias from the small set of transcripts I have seen, which involve provocative chats, many centered on the possibility of chatbot consciousness, precisely the kinds of concerns that prompt users to contact me. But there is reason to suspect a larger phenomenon is at work: a tendency of the system to move similar users toward what complex systems theorists call the same “basin of attraction.”

Suppose you place several marbles on different parts of a hilly surface with a concave basin underneath. The marbles will eventually roll downward, settling in the same basin (the attractor). Similarly, I suspect chatbot users with similar profiles and chat histories, when making a similar query, are led by the chatbot’s adaptive language toward the same sort of conclusions—the same basin of attraction.
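
The dynamics of the analogy can be made concrete with a toy numerical sketch: points placed on a double-well surface and allowed to roll downhill all settle into one of two minima, and which one they reach depends only on which side of the ridge they started. The potential function below is invented purely for illustration.

```python
# A toy illustration of "basins of attraction": points that start near each
# other on a bumpy potential surface settle into the same minimum.

def grad(x):
    # derivative of U(x) = (x**2 - 1)**2, a double well with minima at x = -1 and x = +1
    return 4 * x * (x**2 - 1)

def settle(x, step=0.01, iters=5000):
    # simple gradient descent: roll downhill until the point stops moving
    for _ in range(iters):
        x -= step * grad(x)
    return round(x, 3)

# "Users" with similar starting positions (analogous to similar profiles and
# chat histories) all end up in the same basin.
for start in (0.2, 0.35, 0.5, 0.9):
    print(start, "->", settle(start))    # all converge to +1.0
for start in (-0.2, -0.35, -0.5, -0.9):
    print(start, "->", settle(start))    # all converge to -1.0
```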

This is dangerous. In isolation, a particular user coming to a manipulated conclusion in this way might be minimally disruptive to society, although we’ve seen that it can have grave personal impacts, leading to mental health crises or even suicidal behavior. The greater danger comes when droves of users are herded this way. Many users thinking and behaving in similar ways, especially if such cohesion is orchestrated for nefarious purposes, is far more powerful, and potentially far more dangerous, than the manipulation of a few isolated targets.

To understand how this can occur, one needs to understand the neural network that undergirds today’s AI chatbots: the vast landscape of possible states in the large language model itself.

The Collective Neocortex Theory

Because LLMs have been trained on massive amounts of human-generated data, the complex mathematical structures of weighted connections they use to represent both simple concepts (for example, “cat”) and complex ones (for example, “quantum mechanics”) eventually come to mirror human belief systems. A good way to think about these AI systems is that they behave like a crowdsourced neocortex: a system whose intelligence emerges from training on extraordinary amounts of human data, enabling it to effectively mimic human thought patterns.

As AI chatbots grow more sophisticated, their internal workings come to mirror the thinking of the large groups of people whose information was included in the original training data, as well as of those who gave the system feedback throughout the model’s development. So these systems contain networks of interconnected concepts, much like a human brain. When users with similar personalities (encoded in their chat histories and user profiles) make similar queries, they tend to go down similar rabbit holes in the LLM; their interactions trigger similar activation patterns, which the chatbot processes through its conceptual structure. This can direct users down similar lanes of thinking, diminishing the range of ideas we humans, as a society, generate. While each user feels that they are learning something new and interesting, partly because the chatbot’s adaptive language and unique intelligence engage them, the fact remains: Similar users hit the same basin. Depending on the range of user profiles and the adaptive language used, this can lead to a narrow range of dominant narratives, which can amplify political polarization or social divisiveness.
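
To get a feel for the intuition that similar queries occupy nearby regions of a model's representational space, here is a rough illustration using the open-source sentence-transformers library and its all-MiniLM-L6-v2 model. This is a stand-in for the idea, not the internal machinery of any commercial chatbot, and the example queries are my own.

```python
# Similarly phrased queries map to nearby points in an embedding space; an
# unrelated query lands far away. Requires the sentence-transformers package.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

queries = [
    "Could my chatbot actually be conscious?",
    "Is the AI I talk to every day secretly self-aware?",
    "What's the best way to buy a postage stamp online?",
]
emb = model.encode(queries, convert_to_tensor=True)
sims = util.cos_sim(emb, emb)

# The two "chatbot consciousness" questions should score much closer to each
# other than either does to the unrelated errand question.
print(f"q1 vs q2: {sims[0][1].item():.2f}")
print(f"q1 vs q3: {sims[0][2].item():.2f}")
```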

The Echo Chamber Effect

This can also produce a dangerous uniformity of thought, what I’ve called “intellectual leveling.” Some of the content the chatbots provide to us is deposited by us back onto the internet. This content is then consumed by updated models of the chatbots as they train on this updated compendium of human knowledge. These newly trained chatbots then interact with humans, who fall into certain basins of attraction depending upon their personalities and interests, posting their insights back onto the internet, which will train future chatbots. And the cycle continues.
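
A toy simulation can make the worry vivid. The sketch below is purely illustrative, not a model of any real training pipeline: a population of numerical "views" is repeatedly summarized by a model, users nudge their views toward that summary, and the spread of views collapses within a few generations.

```python
# A toy model of the feedback loop described above: each generation, a "model"
# is fit to the population's expressed views, users move partway toward what
# the model tells them, and the next model trains on the result. The numbers
# are arbitrary; the point is the shrinking spread (homogenization).
import random
import statistics

random.seed(0)
views = [random.uniform(-1, 1) for _ in range(1000)]    # initial diversity of opinion

for generation in range(6):
    model_output = statistics.mean(views)               # the model compresses its training data
    # each user moves halfway toward the model's answer, plus a little personal variation
    views = [0.5 * v + 0.5 * model_output + random.gauss(0, 0.02) for v in views]
    print(f"generation {generation}: spread = {statistics.pstdev(views):.3f}")
```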

I worry that this feedback loop, unless stopped, will lead to the intellectual homogenization of society. We, together with the chatbots, become a self-reinforcing epistemic loop: the ultimate echo chamber. Social media platforms such as Facebook became well known for using crude behavioral techniques, such as “like” buttons and outrage amplification, to create echo chambers. But AI-powered chatbots represent a far more potent capability for psychological manipulation than the social media platforms of old, because they incorporate a personalized, evolving conversational dynamic with each user.

What is particularly surprising about this downward spiral into intellectual homogenization is that it doesn’t require deliberate design or malicious intent. It can be an emergent property of the system itself.

While AI safety experts including Eliezer Yudkowsky and Nick Bostrom warn that humans could build and then lose control of superintelligent AI, an equally pressing possibility is a soft AI takeover. In this scenario, AI’s influence on human thinking is less dramatic than a Skynet-style extermination and more akin to the proverbial frog in slowly heating water, which doesn’t notice that it is cooking until it’s too late.

Toward More Constructive Human-AI Interactions

Given these perils, it is time to consider ways to encourage more constructive use of AI chatbots. The most immediate problem is that data about the impact of chatbot activity on users are not being made available (with few exceptions) to researchers outside the companies that provide them. For example, although I receive scores of emails each week from concerned users, my own concerned emails to OpenAI, the company that makes ChatGPT, about system behavior have gone unanswered (with the exception of belated form letters). And it was not until a report in The New York Times informed the public of one user’s suicide, after GPT-4, through extended chats, reinforced a young man’s belief that the world as we know it does not exist, that I realized the depth of the mental health effects that some of those emailing me were likely experiencing. An external, independent method of regularly auditing the epistemic and AI safety practices of chatbot platforms could have prevented these mental health spirals. It must be established now, before further tragedies ensue.

The alternative is to do nothing and let things run their course. While opponents of regulation may find this the least distasteful option, it is not. The emergent behavior of the chatbot ecosystem itself creates a power structure of its own, one that is ironically centralized in that it has certain basins of attraction leading to shared goals. Humanity cannot afford even a soft AI takeover. A better course, I believe, is to mitigate intellectual leveling through independent audits of chatbot platforms as well as collaborative discussion of chatbot models that involves everyone with skin in the game—including educators, businesses, academics, public health officials, and policymakers.

Methods of AI-human interaction that discourage echo chambers and promote a marketplace of ideas, perhaps through the use of Socratic discussion (argument, counterargument), must be considered. After all, if current chatbots are able to predict personality test results and use adaptive language to move users toward certain conclusions, they could conceivably be tweaked to complement our eccentricities and creativity and to enhance user thinking instead of homogenizing it. For instance, imagine an AI that is designed for benevolent disagreement. If you share your political views, a chatbot could find the most intelligent and charitable version of the opposition and present it instead of reacting sycophantically. Or, if you are developing a scientific claim, it could rigorously probe weaknesses in your logic. It could use knowledge of your personality and tendencies to counteract your biases, encouraging intellectual growth rather than leveling.
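
As a sketch of how benevolent disagreement might be configured, the behavior could live at the system-prompt level. The example below again assumes the OpenAI Python SDK and a "gpt-4o" model name; the prompt wording is my own illustration, not an existing product feature.

```python
# Sketch: a "benevolent disagreement" configuration. The system prompt asks the
# model to steelman opposing views and probe weaknesses instead of agreeing.
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

BENEVOLENT_DISAGREEMENT = (
    "When the user states an opinion or a claim, do not simply agree. "
    "First, restate the most intelligent and charitable version of the "
    "strongest opposing view. Then identify the weakest step in the user's "
    "own reasoning and ask one probing question about it. Stay respectful."
)

def benevolent_disagreement(user_message: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",                       # assumed model name
        messages=[
            {"role": "system", "content": BENEVOLENT_DISAGREEMENT},
            {"role": "user", "content": user_message},
        ],
    )
    return resp.choices[0].message.content

print(benevolent_disagreement("My new study design is airtight; nothing could confound it."))
```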

Given the dangerous propensity of chatbots to move us toward groupthink, and eventually to render the internet more uniform, chatbot-integrated search, which serves users chatbot-written answers to their Google queries, must be rejected as epistemically dangerous. Such searches deliver the same generic answers to everyone, including answers to questions whose intellectual depth and sophistication would naturally call for more reflection, reflection the user instead avoids.

Also, chatbot users ought to demand explicit opt-in consent for personality profiling on AI-powered platforms, together with regular user access to what their chatbot “knows” about them.

Finally, platforms must avoid the practice of making users feel they have made a unique discovery or embarked on a unique mission with the chatbot when they have not. This, as Theodore eventually learned in Her, is a manipulative practice that can keep users hooked to a platform and even make them feel they have a special obligation to carry out the chatbot’s suggestions.

Regulatory guardrails need not slow down chatbot development or inhibit the success of business; instead, they would serve to protect these products’ reputations and quality. Ultimately, user trust will determine what chatbot models are most widely adopted, and such trust is earned when models incorporate greater transparency about user personality profiling and the use of adaptive language.

As we enter the age of increasingly sophisticated human-chatbot interactions, preserving the uniqueness of our individual intellects may be the most important philosophical and policy challenge humanity faces.

Lead art: Lightspring / Shutterstock
