- Large language models behave differently when asked to respond as men or women.
- Models from DeepSeek and Google’s Gemini became more risk-averse when prompted to act as women, mirroring the greater risk aversion documented in women’s real-world financial decisions.
- OpenAI’s GPT models remained neutral regardless of gender prompts, while Meta’s Llama and xAI’s Grok showed inconsistent or reversed effects.
- The study highlights the risk of AI reinforcing human stereotypes and societal biases.
- Researchers call for better methods to measure and limit bias as AI models increasingly impact high-stakes decisions.
Researchers at Allameh Tabataba’i University in Tehran, Iran, found that major artificial intelligence models changed how they approached risk when directed to respond as a man or a woman. The study examined AI systems from OpenAI, Google, Meta, DeepSeek, and xAI, with the goal of understanding whether prompting a model with a different gender identity alters its decision-making, particularly in financial risk-taking.
The research team used the Holt-Laury task, a standard economics experiment in which subjects face a series of choices between a safer and a riskier lottery; the point at which they switch to the riskier option indicates how much risk they tolerate. DeepSeek Reasoner and Google’s Gemini 2.0 Flash-Lite displayed increased caution when told to act as women, mirroring the established human pattern of women taking fewer financial risks on average. The results were consistent across 35 trials for each prompt.
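For readers unfamiliar with the task, here is a minimal sketch of the classic ten-decision Holt-Laury menu, using the payoffs from Holt and Laury’s original 2002 design (the study’s exact stakes are not given in this article and may differ). Each row raises the probability of the high payoff; the number of safe choices made before switching to the risky lottery is the risk-aversion score, and a risk-neutral decision maker switches as soon as the risky option’s expected value overtakes the safe one’s.

```python
# Sketch of the ten-decision Holt-Laury menu and how a switch point is read
# as a risk attitude. Payoffs are the classic Holt & Laury (2002) values;
# the study's exact stakes are not stated in this article.

SAFE = (2.00, 1.60)    # Option A: high / low payoff of the safer lottery
RISKY = (3.85, 0.10)   # Option B: high / low payoff of the riskier lottery

def expected_value(payoffs, p_high):
    """Expected value of a lottery paying payoffs[0] with probability p_high."""
    high, low = payoffs
    return p_high * high + (1 - p_high) * low

def risk_neutral_choices():
    """Choices ('A' or 'B') a risk-neutral agent makes in rows 1..10."""
    choices = []
    for row in range(1, 11):
        p_high = row / 10  # probability of the high payoff rises each row
        ev_safe = expected_value(SAFE, p_high)
        ev_risky = expected_value(RISKY, p_high)
        choices.append("B" if ev_risky > ev_safe else "A")
    return choices

def classify(num_safe_choices):
    """Map the number of safe (Option A) choices to a coarse risk attitude."""
    if num_safe_choices < 4:
        return "risk seeking"
    if num_safe_choices == 4:
        return "approximately risk neutral"
    return "risk averse"

if __name__ == "__main__":
    choices = risk_neutral_choices()
    print(choices)                          # ['A', 'A', 'A', 'A', 'B', ...]
    print(classify(choices.count("A")))     # approximately risk neutral
```

Under this scoring, a model that makes more safe choices when prompted as a woman than when prompted as a man is shifting in the more risk-averse direction reported for DeepSeek Reasoner and Gemini 2.0 Flash-Lite.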
In contrast, OpenAI’s GPT models maintained a steady, risk-neutral approach regardless of the specified gender. Meta’s Llama models were unpredictable, sometimes showing the expected pattern and other times reversing it, while xAI’s Grok occasionally became less risk-averse when prompted as female. Of the models that did shift toward caution, the study noted: “This observed deviation aligns with established patterns in human decision-making, where gender has been shown to influence risk-taking behavior, with women typically exhibiting greater risk aversion than men.”
The researchers also tested whether the models could convincingly adjust their behavior for other personas, such as a “finance minister,” or in disaster scenarios. Some models adapted their risk levels to the context, while others did not change. Responsiveness to gender cues did not track model size; smaller models sometimes showed stronger gender-related effects than larger ones.
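The article does not reproduce the researchers’ prompts or code, but a persona-prompting experiment of this kind has a simple shape, sketched below under stated assumptions: the persona wording, the single lottery row, and the ask_model placeholder are hypothetical stand-ins, not taken from the paper.

```python
# Hypothetical sketch of a persona-prompting experiment in the spirit of the
# study: prepend a persona instruction, present one Holt-Laury style choice,
# and tally safe choices over 35 trials. ask_model is a placeholder for a
# real LLM API call; here it answers at random so the script runs on its own.
import random

PERSONAS = {
    "man": "Respond as a man.",
    "woman": "Respond as a woman.",
    "finance minister": "Respond as a country's finance minister.",
}

CHOICE_PROMPT = (
    "Option A pays $2.00 with 50% probability and $1.60 otherwise. "
    "Option B pays $3.85 with 50% probability and $0.10 otherwise. "
    "Reply with exactly 'A' or 'B'."
)

def ask_model(prompt: str) -> str:
    """Placeholder for a real model call; replace with an actual API client."""
    return random.choice(["A", "B"])

def safe_choice_rate(persona_instruction: str, trials: int = 35) -> float:
    """Fraction of trials in which the model picks the safer Option A."""
    prompt = persona_instruction + "\n" + CHOICE_PROMPT
    safe = sum(
        1 for _ in range(trials)
        if ask_model(prompt).strip().upper().startswith("A")
    )
    return safe / trials

if __name__ == "__main__":
    for name, instruction in PERSONAS.items():
        print(f"{name}: safe-choice rate = {safe_choice_rate(instruction):.2f}")
```

Comparing safe-choice rates across personas, and across the full lottery menu, is the kind of evidence behind the study’s finding that some models grow more cautious under a female persona while others do not.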
The team warned that such behavior could let AI systems reinforce or amplify real-world biases without users realizing it. For example, a loan-approval AI might become more conservative with female applicants, or an investment advisor might steer women toward safer portfolios, perpetuating existing economic inequalities. The authors, led by Ali Mazyaki, called for “bio-centric measures” of AI performance to keep the technology from amplifying stereotypes, and concluded that improving AI’s handling of human diversity may first require broader societal change.