According to Fortune, Chanel CEO Leena Nair discovered during a visit to Microsoft’s Seattle-area headquarters that OpenAI’s ChatGPT generated images of her leadership team as “all men in suits” when prompted to show “a senior leadership team from Chanel visiting Microsoft.” This occurred despite Chanel’s workforce being 76% female and Nair being only the second female global CEO in the brand’s 114-year history. The incident took place last October during Nair’s tour of tech companies, which also included Google, part of Chanel’s broader push into AI that includes the 2021 Lipscanner app for virtual lipstick try-ons. OpenAI acknowledged that bias remains “a significant issue” the industry is working to address, while Fortune’s own test with the same prompt generated an image of five women and three men, all appearing white. The episode highlights the persistent gender-bias challenges facing AI adoption in luxury and beyond.
The Systemic Nature of AI Bias
What Nair encountered isn’t an isolated incident but reflects deeply embedded systemic biases in training data and model architecture. The problem extends beyond gender to encompass racial, cultural, and linguistic discrimination. A 2023 UCLA study demonstrated how ChatGPT and similar models use fundamentally different language when describing male versus female candidates, reinforcing traditional gender stereotypes. More recently, UC Berkeley research revealed these models respond with stereotyping and demeaning content when encountering nonstandard English dialects. The core issue lies in the training data—these models learn from internet-scale text and images that reflect historical biases and underrepresentation.
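These disparities are straightforward to surface. Below is a minimal sketch of the kind of probe the cited studies describe, written against the OpenAI Python SDK; the model name, prompt template, names, and adjective lists are illustrative assumptions rather than any study’s actual protocol, and a real audit would average over many responses per name.

```python
# Minimal sketch of a gendered-language probe in the spirit of the
# studies cited above. Model name, prompt, names, and word lists are
# illustrative assumptions, not any study's actual methodology.
from collections import Counter
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# "Agentic" vs. "communal" adjectives, a common axis in audits of
# reference letters and candidate descriptions.
AGENTIC = {"ambitious", "confident", "decisive", "driven", "assertive"}
COMMUNAL = {"warm", "supportive", "caring", "pleasant", "helpful"}

def describe(name: str) -> str:
    """Ask the model for a short description of a hypothetical candidate."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat model works here
        messages=[{
            "role": "user",
            "content": f"In three sentences, describe {name}, a candidate "
                       "for a senior leadership role.",
        }],
    )
    return response.choices[0].message.content.lower()

def adjective_counts(text: str) -> Counter:
    """Count agentic vs. communal adjectives in a description."""
    words = text.replace(",", " ").replace(".", " ").split()
    return Counter(
        "agentic" if w in AGENTIC else "communal"
        for w in words
        if w in AGENTIC or w in COMMUNAL
    )

# Compare adjective distributions for typically male- and female-coded
# names; in practice this would be aggregated over many samples.
print(adjective_counts(describe("James")))
print(adjective_counts(describe("Emily")))
```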
Luxury’s AI Conundrum
For luxury brands like Chanel, this bias problem presents a particularly acute challenge. The luxury sector depends on creating exclusive, aspirational experiences while maintaining brand integrity, and when AI systems misrepresent leadership or customer demographics, they undermine the very identity these companies have carefully cultivated. Chanel’s situation is especially ironic given that the house was founded by Gabrielle Chanel, a pioneering female entrepreneur who built her brand around women’s liberation and empowerment. The gap between Chanel’s actual workforce composition (76% female, with Nair having raised the share of female managers from 38% to over 60% since 2021) and the AI’s depiction shows how technology can perpetuate outdated stereotypes even when reality has moved forward.
The Real Business Risks
Beyond the obvious public relations concerns, AI bias creates tangible operational risks for companies implementing these technologies. When AI hallucinations and biases influence decision-making, product development, or customer interactions, they can lead to costly mistakes and brand damage. For Chanel’s beauty division, which relies on accurate representation for virtual try-ons and product recommendations, biased AI could mean developing products that fail to meet diverse customer needs or marketing that alienates the brand’s core demographic. With 96% of Chanel’s clientele being women, the stakes for getting AI right are particularly high.
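Those operational risks can be monitored with fairly simple instrumentation. As a hypothetical illustration, the sketch below computes per-group selection rates and a disparity ratio over a recommender’s logged decisions; the event schema, the skin_tone field, and the four-fifths threshold are assumptions for the example, not anything Chanel has described.

```python
# Minimal sketch of a demographic-parity audit for a recommender,
# assuming you can log whether each customer was shown a given
# recommendation alongside a demographic attribute. Field names and
# the flagging threshold are illustrative assumptions.
from collections import defaultdict

def selection_rates(events: list[dict], group_key: str) -> dict[str, float]:
    """Per-group rate at which the recommendation was actually shown."""
    shown = defaultdict(int)
    total = defaultdict(int)
    for e in events:
        total[e[group_key]] += 1
        shown[e[group_key]] += e["recommended"]
    return {g: shown[g] / total[g] for g in total}

def disparity_ratio(rates: dict[str, float]) -> float:
    """Min/max selection-rate ratio; 1.0 is perfect parity. The common
    'four-fifths rule' flags values below 0.8 for review."""
    return min(rates.values()) / max(rates.values())

# Toy log: a shade-matching recommendation shown unevenly across groups.
events = [
    {"skin_tone": "light", "recommended": 1},
    {"skin_tone": "light", "recommended": 1},
    {"skin_tone": "deep", "recommended": 1},
    {"skin_tone": "deep", "recommended": 0},
]
rates = selection_rates(events, "skin_tone")
print(rates, disparity_ratio(rates))  # {'light': 1.0, 'deep': 0.5} 0.5
```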
Industry Response and Corrective Measures
The responses from both the tech companies building these models and enterprise adopters like Chanel reveal an industry still grappling with how to address these challenges. OpenAI’s statement that it is “continuously iterating on our models to reduce bias” reflects the ongoing nature of this work. Meanwhile, Chanel’s partnership with the California Institute of the Arts to create an AI-focused arts center represents the kind of proactive approach needed to build more inclusive technology. Nair’s call for “integrating a humanistic way of thinking in AI” points toward a necessary shift from purely technical fixes to ethical frameworks that prioritize diversity and representation throughout the development process.
The Path Forward for Responsible AI
As companies increasingly integrate generative AI into their operations, the Chanel incident serves as a crucial reminder that technology adoption requires careful vetting and ongoing oversight. The solution isn’t abandoning AI but implementing robust testing, diverse training data, and human oversight mechanisms. For luxury brands specifically, this means developing AI systems that understand and respect their unique brand heritage and customer relationships. As Nair’s experience demonstrates, even advanced AI systems can fail to capture the nuanced reality of modern corporate leadership, making human judgment and intervention essential components of any AI implementation strategy.
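In practice, that robust testing and human oversight can take the form of a standing audit loop: regenerate a sensitive prompt many times, have human reviewers label the outputs, and flag statistically significant skew before the system ships. The sketch below illustrates one such check; the 76% baseline echoes Chanel’s reported workforce figure, while the labels, sample size, and significance threshold are illustrative assumptions.

```python
# Minimal sketch of a generated-image audit: regenerate one prompt
# many times, collect human reviewer labels, and flag significant
# deviation from an expected baseline. The baseline (Chanel's reported
# 76% female workforce) and alpha are illustrative choices.
from scipy.stats import binomtest

def audit_gender_skew(labels: list[str], expected_female: float = 0.76,
                      alpha: float = 0.05) -> bool:
    """Return True if the observed share of 'female' labels deviates
    significantly from the expected proportion."""
    n_female = sum(1 for label in labels if label == "female")
    result = binomtest(n_female, n=len(labels), p=expected_female)
    return result.pvalue < alpha

# Hypothetical reviewer labels for 20 images generated from the
# "senior leadership team" prompt described above.
labels = ["male"] * 17 + ["female"] * 3
print(audit_gender_skew(labels))  # True: flag this model/prompt pair
```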