How applying cognitive diversity to LLMs could transform the user experience


As AI continues to evolve, so do the experiences of the people it serves.
A 2025 McKinsey study found that 62% of organizations are at least experimenting with AI agents, while nearly nine in ten say they now use AI regularly in their work.
(The author is Dean of the School of Engineering at Manhattan University, Professor Emeritus of Engineering Design and Mechanical Engineering at Penn State University, and a KAI practitioner.)
While these numbers are encouraging, concerns about the technology persist, particularly around the quality and reliability of data and the inaccurate responses AI tools can generate. Inaccuracy is the risk organizations most often work to mitigate, according to McKinsey.
So, is there a way to improve the output of LLMs and get the answers and information we want, delivered in the way we need? The current advice is simply to tell users to write better prompts, but if we look at how humans interact with one another, there may be another solution.
Introducing cognitive diversity – and why it matters for LLMs
In humans, cognitive diversity refers to differences in how individuals think, solve problems, generate ideas, and make decisions.
The KAI inventory suggests that this diversity comes in the form of a natural, innate preference for the amount of structure we use when we generate solutions, organize our environment when we implement them, and respond to group rules and norms.
The theory of adaptation and innovation, on which the KAI is based, describes a spectrum from highly adaptive to highly innovative, with infinite variations in between.
Generally speaking, more adaptive individuals prefer more structure and like to rely on clear, consistent rules, while more innovative people prefer less structure and are more likely to bend or disregard rules in pursuit of a solution.
A person’s preference for more adaptation or more innovation is unrelated to their intelligence or motivation, and for this reason there is no ideal position on the KAI spectrum.
Decades of research by Dr. MJ Kirton on adaptation and innovation theory suggest that when individuals understand their cognitive styles, solutions can be found more effectively, more practically, and more efficiently – both alone and in teams.
But how can we apply this theory to technology, and can we train LLMs to work the same way? Research suggests the answer is “yes.”
What the research suggests
A recent article by researchers at Carnegie Mellon University and Penn State University – Putting the ghost in the machine: emulating cognitive style in large language models – explored a fundamental question: can LLMs imitate cognitive styles if we teach them how?
The researchers trained an LLM on adaptation-innovation theory, giving it an understanding of cognitive diversity and of how more adaptive and more innovative people behave. It was then tasked with solving three design problems using two different prompts, each written for a different cognitive style.
One prompt was worded adaptively – reflecting the thinking style of someone who is meticulous, attentive to detail, and thrives when working with clear expectations; the other prompt was worded innovatively – reflecting the thinking style of someone who is energetic when expectations are more ambiguous and there is greater flexibility.
Responses were evaluated on feasibility (how practical and realistic the solutions were) and on paradigm relatedness (whether the ideas stayed within existing frameworks or broke away from them).
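The experimental setup can be sketched as style-conditioned prompting. The wording below is illustrative, not the researchers’ actual prompts; the two system-prompt strings and the helper function are assumptions for the sake of the example.

```python
# Hypothetical sketch of style-conditioned prompting in the spirit of the
# CMU/Penn State experiment. The prompt wording is illustrative only.

ADAPTIVE_STYLE = (
    "You are solving a design problem. Work methodically and in detail, "
    "stay within established frameworks, and propose solutions that are "
    "practical, feasible, and consistent with existing rules."
)

INNOVATIVE_STYLE = (
    "You are solving a design problem. Treat constraints as negotiable, "
    "question existing frameworks, and propose unconventional solutions "
    "even if they are harder to implement."
)

def build_messages(problem: str, style: str) -> list[dict]:
    """Wrap a design problem in a system prompt matching a cognitive style."""
    system = ADAPTIVE_STYLE if style == "adaptive" else INNOVATIVE_STYLE
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": problem},
    ]

# The same problem, framed two ways, would then be sent to the model.
messages = build_messages("Design a bicycle rack for a crowded campus.", "adaptive")
```

The key design point is that the problem statement stays identical in both conditions; only the system prompt carries the cognitive style.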
Results revealed that the adaptive prompt produced more feasible, structured, conventional solutions, while the innovative prompt produced solutions that were less feasible but more paradigm-breaking.
Simply put, the LLM was not just generating solutions or answers, it was generating the right kinds of solutions based on its knowledge of cognitive diversity and of the cognitive style of the person asking the question. As a result, it delivered a more adaptive or more innovative solution depending on how the request was worded and what the requester needed.
But what does all this mean for the future of LLMs?
In short, we are wasting the potential of LLMs if we do not take cognitive diversity into account. If we want to achieve better, more relevant, and more productive solutions through AI, and obtain them more efficiently, the next generation of the technology must incorporate an understanding of cognitive diversity.
In real life, we rarely preface a question by explaining in detail how we think or approach problems, but we know when an answer does or does not match our way of thinking – and whether it’s the type of answer we’re looking for. If LLMs could offer us the same range of possible answers that the cognitive style spectrum represents, it could eliminate the endless cycle of re-prompting until we land on the answer we need.
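One way such a spectrum could be embedded is by mapping a user’s position on an adaption-innovation scale to prompt guidance. This is a minimal sketch under stated assumptions: the 32–160 range mirrors the KAI score scale, but the thresholds and wording here are illustrative inventions, not part of the KAI inventory or the cited research.

```python
# Hypothetical sketch: mapping a position on the adaption-innovation
# spectrum (32 = highly adaptive, 160 = highly innovative, mirroring the
# KAI scale) to system-prompt guidance. Thresholds and wording are
# illustrative assumptions.

def style_instruction(kai_score: int) -> str:
    """Return system-prompt guidance for a given spectrum position."""
    if not 32 <= kai_score <= 160:
        raise ValueError("score outside the KAI scale")
    if kai_score < 80:
        # More adaptive: refine within existing frameworks.
        return ("Prefer structured, rule-consistent answers that refine "
                "existing approaches step by step.")
    if kai_score <= 112:
        # Middle of the spectrum: blend both tendencies.
        return ("Balance proven approaches with selective departures from "
                "convention where they clearly pay off.")
    # More innovative: break away from existing frameworks.
    return ("Prefer unconventional answers; question assumptions and "
            "propose ideas outside existing frameworks.")
```

In practice the score might come from a user profile or be inferred over time, so the same question would yield differently styled answers for different users without extra prompting effort.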
Research shows that by embedding an understanding of human cognitive styles into the technology itself, we give ourselves and our AI tools a head start. From there, the opportunities to achieve even better productivity, efficiency, and user satisfaction rates could skyrocket.
