Before We Start, a Statement.
Not every claim about suppression or inequality is built on solid ground. Many arguments, while emotionally compelling, falter under scrutiny.
Take this post I came across, where the author argued that solo female founders have a minuscule chance—0.015%—of being accepted into Y Combinator.
At first glance, it feels like a heartbreaking statistic. But dig a little deeper, and you'll see the math doesn't add up. She multiplied Y Combinator's overall acceptance rate (1%) by the share of solo female founders among applicants (1.5%), treating the two as independent events. Even if they were independent, that product estimates the odds that a random applicant is both a solo female founder and accepted; it says nothing about the acceptance rate for solo female founders, which is a conditional probability. That Is Not How Probabilities Work! 🤦🤦🤦
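To make the arithmetic concrete, here is a minimal sketch in Python. The counts are purely illustrative (they are not YC's real numbers); the point is only that the naive multiplication answers a different question than the conditional acceptance rate does.

```python
# A minimal sketch -- all counts are illustrative, not YC's real numbers.

overall_acceptance_rate = 0.01   # roughly 1% of all applicants get in
share_solo_female = 0.015        # roughly 1.5% of applicants are solo female founders

# The flawed argument multiplied the two figures:
naive = overall_acceptance_rate * share_solo_female
print(f"Naive multiplication: {naive:.3%}")  # 0.015%, the number from the post

# Even if the two were independent, that product estimates
# P(applicant is a solo female founder AND gets accepted),
# not P(accepted | solo female founder).

# The acceptance rate for solo female founders is a conditional probability,
# computed from (hypothetical) counts of that group alone:
solo_female_applicants = 600
solo_female_accepted = 5
conditional = solo_female_accepted / solo_female_applicants
print(f"Conditional acceptance rate: {conditional:.2%}")  # ~0.83%, a very different number
```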
This kind of emotional reasoning muddies the conversation. Fairness and equity can’t be built on faulty logic—because critics will quickly pounce on these mistakes to dismiss valid concerns.
But here’s the thing: when influential decisions are based on incomplete reasoning—or bias—they create ripple effects. And those effects don’t stop at isolated incidents or individuals.
Any unbalanced, illogical statement or action scales, and a technology that will outsmart humans will amplify either extreme.
The Latest: S&P 500 Companies Rolled Back DEI Commitments.
That brings us to what’s happening across some of the biggest companies on the S&P 500. In 2024, a surprising trend swept through corporate America: key players rolled back their diversity, equity, and inclusion (DEI) commitments. These are the giants that shape industries and touch our daily lives.
Walmart. Founded in 1962, it is the largest retailer in the world. It ended racial equity training, dropped its Racial Equity Center, and even pulled some LGBTQ+ items from its website. A cultural statement from a company with a store within 10 miles of 90% of Americans.
Ford Motor Company. A legacy brand born in 1903, Ford stopped using diversity quotas for its dealerships and suppliers and pulled out of LGBTQ+ advocacy surveys. They say they're "focusing on communities," but isn't inclusivity part of those communities in itself?
Harley-Davidson. Since 1903, Harley-Davidson has been selling the idea of freedom on two wheels. Yet, this year, it axed its entire DEI function and ended goals for supplier diversity.
Molson Coors. This brewing powerhouse, founded in 1873, eliminated diversity goals tied to executive pay and dropped out of the Human Rights Campaign’s Corporate Equality Index.
Lowe’s. Lowe’s has been a cornerstone of American homes since 1946. This year, it stopped participating in Pride parades and LGBTQ+ surveys. They claim it’s about staying “business-focused,” but the optics feel like a step backward.
John Deere. Founded in 1837, it is an agricultural icon. While it hasn't openly supported diversity quotas or pronoun policies, its decision to avoid "social awareness" events signals its priorities.
Meta, Google, and Microsoft. Tech titans also quietly trimmed their DEI initiatives this year. Microsoft even cut some DEI-related roles, though they say their commitments remain unchanged. mm…
Many of these companies cited backlash from "anti-woke" activists, financial belt-tightening, or the desire to avoid controversy.
Reasons for Rollbacks
Conservative backlash against perceived "woke" policies.
A desire to align with customer values or to reduce divisive public stances.
Economic considerations, as companies sought to cut costs by scaling back DEI programs.
These decisions aren’t just about corporate culture—they’re about how fairness is programmed into the systems that run our world. AI, in particular, learns from the choices humans make. When DEI commitments shrink, the ripple effects reach AI development in subtle but critical ways.
Bias in, Bias Out.
AI is only as good as the data it learns from.
Data is a mirror of our messy, imperfect world.
Data represents our decisions and actions, biased or not. Think about hiring patterns, college admissions, or even social media trends. All of this becomes part of the datasets that train AI systems.
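A toy sketch makes the mechanism plain. The hiring records below are made up and the "model" is deliberately naive, but it shows how a system that learns from historical decisions simply hands the historical skew back to us as a prediction:

```python
# A toy "bias in, bias out" illustration with made-up hiring records.
from collections import defaultdict

# Hypothetical historical decisions: (group, was_hired)
history = (
    [("A", True)] * 80 + [("A", False)] * 20 +
    [("B", True)] * 30 + [("B", False)] * 70
)

def fit_hire_rates(records):
    """A deliberately naive 'model': learn each group's historical hire rate."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
    for group, hired in records:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {group: hires / total for group, (hires, total) in counts.items()}

model = fit_hire_rates(history)
print(model)  # {'A': 0.8, 'B': 0.3} -- the historical skew comes straight back as the "prediction"
```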
When An Individual Makes a Flawed Statement:
When someone makes an illogical or biased claim—like the one in my earlier example—it might not reach beyond the immediate audience.
Of course, it would be very different if this unverified statement started to spread widely.
When An S&P 500 Company Reduces DEI:
When companies reduce DEI efforts, the ripple effects go far beyond corporate culture.
They directly influence the data that powers AI systems. For instance, when a giant like Walmart dials back DEI initiatives, it alters hiring patterns, supply chain choices, and customer interactions, all feeding into the systems shaping our world.
When DEI is deprioritized, content such as internal communications, documents, and marketing copy will focus less on inclusiveness, be less representative, and be more prone to reinforcing inequality.
As corporate DEI efforts shrink, the data AI models are trained on becomes less diverse. Without intentional checks (like audits or diverse team inputs), the AI absorbs a skewed version of reality—one where certain groups are underrepresented or misrepresented.
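What might one of those intentional checks look like? Here is a minimal sketch of a representation audit; the group names, counts, reference shares, and tolerance are all hypothetical. The idea is simply to compare each group's share of the training data against a reference population and flag anything that falls too far below it.

```python
# A sketch of a lightweight representation audit; group names, counts,
# reference shares, and the tolerance are all hypothetical.
from collections import Counter

training_labels = ["group_a"] * 700 + ["group_b"] * 250 + ["group_c"] * 50
reference_population = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

def audit_representation(labels, reference, tolerance=0.5):
    """Flag any group whose share of the data is below tolerance * its population share."""
    total = len(labels)
    shares = {group: n / total for group, n in Counter(labels).items()}
    flags = {}
    for group, expected in reference.items():
        observed = shares.get(group, 0.0)
        if observed < expected * tolerance:
            flags[group] = {"observed": observed, "expected": expected}
    return shares, flags

shares, flags = audit_representation(training_labels, reference_population)
print("observed shares:", shares)
print("underrepresented:", flags)  # group_c: 5% of the data vs 15% of the population
```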
Without those checks, it creates a feedback loop: biased decisions produce biased data, biased data trains biased AI, and biased AI drives more biased decisions.
Now think about the downstream effects. Students applying for scholarships. White-collar workers applying for jobs. Entire communities seeking access to loans or insurance. If the AI systems deciding their futures are biased, they face systemic barriers—and here’s the kicker: they might not even realize it.
Put In Context
Imagine a performance review tool that looks at how often you speak in meetings or respond to emails. If it’s trained on data from a workforce that rewards a dominant, always-online communication style, it might penalize someone who prefers thoughtful, concise contributions—or someone balancing caregiving responsibilities. Suddenly, your career growth depends on fitting a mold that was never built for you.
Customer service chatbots are another example. They're supposed to help customers efficiently, but if trained on limited data, they might fail to understand someone with a thick accent or a dialect. Imagine calling for help, only to be met with robotic confusion because the AI can't "recognize" your voice; you don't sound like their "typical" customer.
Recommendation engines are the silent influencers of our lives, deciding everything from which shows we watch to which posts we read. When the data reflects societal biases, the AI could end up pigeonholing users.
Marketing AI is similar: these systems analyze customer behavior to target ads and campaigns, but if the training data overrepresents wealthier groups, the AI might ignore lower-income customers altogether. Imagine a kid in a small town never seeing ads for affordable educational tools because the AI decided they weren't part of a "profitable demographic."
Fraud detection systems sound great until they disproportionately flag transactions from specific zip codes or demographics. If the system equates historical inequalities with higher risk, people in underserved communities might find themselves unfairly blocked from opportunities like accessing loans or opening accounts.
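Here is a minimal sketch of how that can happen. Everything in it is hypothetical (the zip codes, the historical flag rates, the weights, the threshold), but it shows how a score that leans on a neighborhood's past flag rate can block one of two otherwise identical transactions:

```python
# A sketch of a fraud score that leans on a neighborhood's historical flag rate.
# The zip codes, rates, weights, and threshold below are all hypothetical.

historical_flag_rate = {
    "10001": 0.02,
    "60619": 0.11,  # a historically over-flagged, underserved area
}

def risk_score(amount, zip_code):
    base = min(amount / 10_000, 1.0)                 # crude amount-based component
    neighborhood = historical_flag_rate.get(zip_code, 0.05)
    return 0.5 * base + 0.5 * (neighborhood / 0.15)  # zip-code history weighs heavily

BLOCK_THRESHOLD = 0.40

for zip_code in ("10001", "60619"):
    score = risk_score(amount=1_500, zip_code=zip_code)
    decision = "BLOCKED" if score > BLOCK_THRESHOLD else "approved"
    # Identical transactions, different outcomes, purely because of where you live.
    print(zip_code, round(score, 3), decision)
```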
You get the point.
A DEI rollback is not everything on its own. But it is a sign that our world is becoming narrower. At scale, it shouts into a flawed echo chamber.
Lessons from The Past.
Let’s say you’re calling 911 during an emergency. Your voice is trembling, your heart’s racing, and every second counts. But instead of connecting you to help, the automated voice recognition system struggles to understand your words. You repeat yourself, louder this time, but the system keeps misinterpreting.
This isn’t a far-fetched “what if.” A Stanford study found that early voice recognition systems had an average word error rate (WER) of 35% for African American speakers compared to 19% for white speakers. The same study found that Apple’s automated speech recognition (ASR) system had a 45% error rate for Black speakers compared to 23% for white speakers.
Think of it as teaching a child language but only letting them hear one voice, one tone, and one accent. Sure, they’ll learn. But only how to understand that specific voice. That’s exactly what happened with early voice recognition systems.
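For anyone curious what "word error rate" actually measures: it is word-level edit distance divided by the length of the reference transcript, i.e., substitutions, deletions, and insertions over the number of reference words. A minimal sketch, with a made-up transcript pair, looks like this:

```python
# Word error rate (WER) = (substitutions + deletions + insertions) / reference length.
# A minimal implementation; the transcript pair below is made up.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / len(ref)

reference = "i need an ambulance at my address right now"
hypothesis = "i need and am glance at my dress right now"
print(f"WER: {wer(reference, hypothesis):.0%}")  # 44% -- nearly half the words are wrong
```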
Now imagine trying to upload a passport photo, only to be told your mouth is open when it isn’t, or your eyes are closed when they’re not. That’s exactly what happened to Elaine Owusu, a Black student in the UK, whose photo was flagged multiple times by the government’s AI-powered passport photo checker. She eventually had to override the system to complete her application.
A BBC investigation revealed that dark-skinned women were more than twice as likely as light-skinned men to have their photos rejected—22% versus 9%. The AI also struggled to identify facial features, misinterpreting eyes and lips for people with darker skin tones. Shockingly, internal documents revealed the Home Office knew about these biases before deployment but gave it the green light anyway.
The Karma of Staying Silent
Silence isn’t harmless.
Silence is a choice—a choice to let others define the future for you.
When you stay quiet, you allow those who speak the loudest to shape the conversation and, in turn, train the AI systems that will govern our lives. These systems learn from the data they’re fed, and if that data only reflects the opinions of a vocal few, we’ll all live in a world shaped by their biases.
AI doesn't care about truth—it cares about patterns. If you don't speak up, you allow the system to be trained by someone else's reality.
I know speaking out can feel intimidating, especially if you’re an introvert like me. The fear of being judged, misunderstood, or even bullied is real.
Speaking up doesn't mean shouting from the rooftops; it can be simple.
Try:
sharing a thoughtful comment,
challenging an unfair assumption,
or questioning a decision that feels wrong.
Of course, not every piece of content is valuable.
AI learns from patterns in the data it’s fed, and while thoughtful, well-reasoned contributions help shape a balanced system, noise—like misinformation, trolling, or low-quality input—can distort it.
By speaking up with meaningful insights, you help train AI to reflect a broader, more accurate representation of our collective voices.