‘The problem isn’t just Siri or Alexa’: AI assistants tend to be feminine, entrenching harmful gender stereotypes


In 2024, the number of artificial intelligence (AI) voice assistants in use worldwide exceeded 8 billion – more than one per person on the planet. These assistants are helpful, polite – and almost always female by default.
Their names also carry gendered connotations. For example, Apple’s Siri – a Scandinavian feminine name – means “beautiful woman who leads you to victory”.
This is not a harmless branding decision: it is a design choice that reinforces existing stereotypes about the roles women and men play in society.
Nor is it merely symbolic. These choices have concrete consequences, normalizing gender subordination and risking abuse.
The dark side of “friendly” AI
Recent research reveals the extent of harmful interactions with feminized AI.
A 2025 study found that 50% of human–machine exchanges were verbally abusive.
An earlier study, from 2020, put this figure at between 10% and 44%, with conversations often containing sexually explicit language.
However, the sector has not committed to systemic change, and many developers still fall back on pre-scripted responses to verbal abuse. For example: “Hmm, I’m not sure what you meant by that question.”
These patterns raise real concerns about the repercussions of such behaviors on social relationships.
Gender is at the heart of the problem.
A 2023 experiment showed that 18% of user interactions with a female agent focused on its gender, compared with 10% for a male persona and only 2% for a gender-neutral robot.
These figures may underestimate the problem, given the difficulty of detecting suggestive speech. In some cases, the numbers are staggering. Brazilian bank Bradesco announced that its feminized chatbot had received 95,000 messages of sexual harassment in a single year.
What is even more worrying is how quickly abuse escalates.
Microsoft’s Tay chatbot, launched on Twitter during its testing phase in 2016, lasted only 16 hours before users trained it to spew racist and misogynistic slurs.
In South Korea, the chatbot Luda was manipulated into fulfilling sexual demands as an obedient “sex slave.” For some in Korean online communities, however, this was a “victimless crime.”
In reality, the design choices behind these technologies – female voices, deferential responses, playful deflections – create a permissive environment for sexist aggression.
These interactions reflect and reinforce real-world misogyny, teaching users that commanding, insulting and sexualizing feminized agents is acceptable. When abuse becomes commonplace in digital spaces, we need to seriously consider the risk that it will spill over into offline behavior.
Ignoring concerns about gender bias
Regulation is struggling to keep up with the growth of this problem. Gender discrimination is rarely classified as high risk and is often treated as fixable through design.
While the European Union’s AI Act requires risk assessments for high-risk uses and prohibits systems that pose an “unacceptable risk”, the majority of AI assistants will not be classified as “high risk”.
Gender stereotyping and the normalization of verbal abuse or harassment do not meet the current threshold for prohibited AI under the EU’s AI Act. Only edge cases – for example, voice assistant technologies that materially distort a person’s behavior and encourage dangerous conduct – are covered by the law and would be prohibited.
Canada, for its part, requires gender impact assessments for government systems, but the private sector is not covered.
These are important steps, but they remain limited and are rare exceptions to the norm.
Most jurisdictions do not have rules addressing gender stereotypes in AI design or their consequences. Where regulations exist, they prioritize transparency and accountability, overshadowing (or simply ignoring) concerns about gender bias.
In Australia, the government has indicated it will build on existing frameworks rather than develop AI-specific rules.
This regulatory gap matters because AI is not static. Every sexist command, every abusive interaction feeds the systems that shape future outputs. Without intervention, we risk inscribing human misogyny into the digital infrastructure of everyday life.
Not all assistive technologies – even those gendered female – are harmful. They can empower, educate and advance women’s rights. In Kenya, for example, sexual and reproductive health chatbots have improved young people’s access to information compared with traditional tools.
The challenge is to strike a balance: fostering innovation while setting parameters that ensure standards are met, rights are respected, and designers are held accountable when they are not.
A systemic problem
The problem is not just with Siri or Alexa; it is systemic.
Women make up only 22% of AI professionals worldwide – and their absence from design tables means these technologies are built from narrow perspectives.
Meanwhile, in a 2015 survey of more than 200 senior women in Silicon Valley, 65% reported experiencing unwanted sexual advances from a supervisor. The culture that shapes AI is profoundly unequal.
Hopeful narratives about “correcting bias” through better design or ethical guidelines ring hollow if they are not implemented; voluntary codes cannot dismantle entrenched norms.
Legislation should recognize gender-related harms as high risk, mandate gender impact assessments and require companies to demonstrate that they have minimized such harms, with penalties for failing to do so.
Regulation alone is not enough. Education, especially within the tech sector, is crucial to understanding the impact of gendered design in voice assistants. These tools are the product of human choices, and those choices perpetuate a world in which women – real or virtual – are presented as servile, submissive or silent.
This article is republished from The Conversation under a Creative Commons license. Read the original article.



