Microsoft hopes Mico succeeds where Clippy failed as tech companies warily imbue AI with personality

Clippy, the animated paperclip that annoyed Microsoft Office users nearly three decades ago, may have been ahead of its time.
Microsoft on Thursday introduced a new artificial intelligence character called Mico (pronounced MEE’koh), a teardrop-shaped cartoon face that will embody the software giant’s Copilot virtual assistant and marks the latest attempt by tech companies to imbue their AI chatbots with greater personality.
Copilot’s cute new emoji-like exterior comes as AI developers face a crossroads in how they present their increasingly capable chatbots to consumers without causing harm or backlash. Some have opted for faceless symbols, others sell flirtatious, human-like avatars, and Microsoft is seeking a happy medium that’s friendly without being obsequious.
“When you talk about something sad, you can see Mico’s face change. You can see him dancing and moving as he gets excited with you,” said Jacob Andreou, vice president of product and growth for Microsoft AI, in an interview with The Associated Press. “It’s in that effort to really land this AI companion that you can really feel.”
For now, only Copilot users in the United States can talk to Mico on laptops and phone apps; the character changes color and wears glasses when in “study” mode. Mico is also easy to disable, a big difference from Microsoft’s Clippit, better known as Clippy and infamous for its persistence in offering advice on word processing tools when it first appeared on desktop screens in 1997.
“It was not well suited to user needs at the time,” said Bryan Reimer, a research scientist at the Massachusetts Institute of Technology. “Microsoft pushed it, we resisted it, and they got rid of it. I think we’re a lot more ready for things like that today.”
Reimer, co-author of a new book called “How to Make AI Useful,” said AI developers balance how much personality to give AI assistants based on who their expected users are.
Devotees of advanced AI coding tools may want them to “act a lot more like a machine, because in the background they know it’s a machine,” Reimer said. “But individuals who don’t trust a machine as much will be better supported – not replaced – by technology that is a little more human-like.”
Microsoft, a provider of workplace productivity tools that relies far less on digital advertising revenue than its Big Tech rivals, also has less incentive to make its AI companion too engaging in ways that have been linked to social isolation, harmful misinformation and, in some cases, suicides.
Andreou said Microsoft has observed that some AI developers are moving away from “giving AI any kind of embodiment,” while others are moving in the opposite direction by allowing AI girlfriends.
“We’re not really interested in those two paths,” he said.
Andreou said the companion’s design is meant to be “genuinely helpful” and not so validating that it would “tell us exactly what we want to hear, confirm biases we already have, or even suck you in from a time-spent perspective and just try to monopolize and deepen the session and increase the time you spend with these systems.”
“By being sycophantic – in the short term, perhaps – the user responds more favorably,” Andreou said. “But in the long run, it doesn’t get that person any closer to their goals.”
Microsoft’s announcements Thursday also included the ability to invite Copilot into a group chat, an idea that resembles how AI has been integrated into social media platforms like Snapchat, where Andreou once worked, or Meta’s WhatsApp and Instagram. But Andreou said those interactions often involve using AI as a joke to “troll your friends,” which is different from the “intensely collaborative” AI-powered workplace Microsoft has in mind.
Microsoft’s audience includes children, part of its long-running competition with Google and other tech companies to deliver its technology to classrooms. Microsoft also announced Thursday that it has added a feature to turn Copilot into a “voice-activated Socratic tutor” that guides students through the concepts they study in school.
A growing number of children are using AI chatbots for everything from homework help and personalized advice to emotional support and everyday decision-making.
The Federal Trade Commission last month launched an investigation into several social media and AI companies — Microsoft was not among them — over potential harm to children and teens who use their AI chatbots as companions.
The investigation came after some chatbots were shown to give children dangerous advice on topics such as drugs, alcohol and eating disorders. The mother of a Florida teenager who died by suicide after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful death lawsuit against Character.AI. And the parents of a 16-year-old sued OpenAI and its CEO Sam Altman in August, alleging that ChatGPT helped the California boy plan and take his own life.
Altman recently promised “a new version of ChatGPT” coming this fall that restores some of the personality of previous versions, which he said the company temporarily discontinued because “we were paying attention to mental health issues” that he said have now been resolved.
“If you want your ChatGPT to respond in a very human way, or use a ton of emoji, or act like a friend, ChatGPT should do it,” Altman said.

