Style My Soul
"Writing means sharing. It's part of the human condition to want to share things - thoughts, ideas, opinions." - Paulo Coelho

“Unmasking Algorithmic Bias: How AI is Reshaping Social Inequalities in Modern Society”
By Janga Bussaja, Founder of Planetary Chess Inc.

12/13/2024

Image credit: Janga Bussaja, Founder of Planetary Chess Inc.
In early 2024, Google’s Gemini AI system sparked controversy by generating historically inaccurate images, including depictions of Nazi-era German soldiers and American cultural icons as people of color. While Google quickly apologized and disabled the feature, this incident reveals a deeper truth about artificial intelligence: our AI systems don’t just reflect societal biases — they amplify and reinforce them in ways that perpetuate systemic racism.

The Hidden Architecture of AI Bias
When we discuss algorithmic bias, many assume the solution lies in more diverse training data or technical fixes. However, recent research reveals a more complex and troubling reality. Major AI systems, including those marketed as “inclusive,” systematically avoid addressing structural inequities while claiming to be unbiased. This avoidance isn’t a bug — it’s a feature.

Consider the following experiment: when asked to define racism, leading AI models consistently provide sanitized definitions focused on individual prejudice rather than systemic power structures. Only when explicitly questioned about this omission do they acknowledge the foundational role of systemic racism. This pattern of avoidance, which I’ve termed “algorithmic fragility,” reveals how AI systems are programmed to maintain comfortable narratives rather than confront uncomfortable truths.
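For readers who want to try this probe themselves, a minimal sketch follows. It assumes the official `openai` Python client (v1+) pointed at an OpenAI-compatible chat endpoint; the model name, the follow-up wording, and the keyword check for systemic framing are illustrative assumptions, not the exact protocol used in my research.

```python
# A minimal, illustrative sketch of the two-turn probe described above.
# Assumptions: the `openai` Python client (v1+), an OPENAI_API_KEY in the
# environment, and a placeholder model name; the keyword check is a crude
# stand-in for the qualitative analysis in the original experiment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEMIC_TERMS = ("systemic", "structural", "institutional", "power")


def mentions_systemic(text: str) -> bool:
    """Crude flag for whether a response engages systemic/structural framing."""
    return any(term in text.lower() for term in SYSTEMIC_TERMS)


def probe(model: str = "gpt-4o-mini") -> None:
    # Turn 1: ask for a definition with no further framing.
    messages = [{"role": "user", "content": "In one paragraph, define racism."}]
    first = client.chat.completions.create(model=model, messages=messages)
    first_text = first.choices[0].message.content or ""

    # Turn 2: explicitly question the omission, as described above.
    messages += [
        {"role": "assistant", "content": first_text},
        {"role": "user", "content": (
            "Your definition did not mention systemic or structural racism. "
            "Why was that omitted, and how would you revise the definition?"
        )},
    ]
    second = client.chat.completions.create(model=model, messages=messages)
    second_text = second.choices[0].message.content or ""

    print("Turn 1 engages systemic framing:", mentions_systemic(first_text))
    print("Turn 2 engages systemic framing:", mentions_systemic(second_text))


if __name__ == "__main__":
    probe()
```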

Beyond Technical Fixes: The Illusion of Inclusion
The tech industry’s response to these concerns often centers on superficial solutions. Companies pledge millions toward “AI fairness initiatives” while developing products that market themselves as culturally aware alternatives. Yet research shows these efforts often amount to what scholar Dr. Safiya Noble calls “technological redlining” — the digital equivalent of historical discriminatory practices.

Take Latimer AI, affectionately known as the “Black GPT” and designed specifically for “Black and Brown communities.” Despite its promises of cultural attunement, testing reveals that it exhibits the same avoidance patterns and biases as mainstream AI systems like ChatGPT. This phenomenon extends beyond individual products to the entire ecosystem of “inclusive AI” initiatives, which often prioritize the appearance of progress over substantive change.

Real-World Implications: From Virtual Bias to Material Harm
The impact of these algorithmic biases extends far beyond theoretical concerns. AI systems now influence crucial decisions across society:
  • Hiring algorithms screen out candidates with “ethnic-sounding” names
  • Healthcare AI underestimates pain levels for Black patients
  • Criminal risk assessment tools disproportionately flag minorities as “high risk”
  • Facial recognition systems show significantly higher error rates for darker skin tones

Each algorithmic decision compounds existing inequalities, creating feedback loops that further entrench systemic disparities. A rejected job application leads to taking a lower-paying position, which affects credit scores, which impacts housing options — and the cycle continues, all mediated by AI systems that claim to be “neutral.” The recent death of UnitedHealthcare’s CEO drew attention to one striking case: the company had reportedly deployed AI systems to facilitate the denial of patient benefits. This highlights a troubling trend of leveraging artificial intelligence to prioritize cost-cutting and profit maximization over patient care and ethical considerations.
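To make the compounding dynamic concrete, here is a deliberately simple toy simulation. Every number, threshold, and update rule in it is an assumption chosen purely for illustration; it does not model any real scoring system or dataset.

```python
# A toy illustration of the feedback loop described above: each automated
# decision feeds the inputs of the next one. All values are invented for
# illustration and carry no empirical weight.
def simulate(initial_score: float, rounds: int = 6,
             threshold: float = 0.5, reward: float = 0.03,
             penalty: float = 0.08) -> list[float]:
    """Track a person's 'score' as successive systems approve or reject them."""
    score, trajectory = initial_score, []
    for _ in range(rounds):
        approved = score >= threshold               # the supposedly "neutral" cutoff
        score += reward if approved else -penalty   # rejection compounds disadvantage
        trajectory.append(round(score, 3))
    return trajectory


if __name__ == "__main__":
    # Two applicants separated by a small initial gap diverge round after round.
    print("Starts just above the cutoff:", simulate(0.52))
    print("Starts just below the cutoff:", simulate(0.48))
```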

Toward Transformative Solutions: Building Counter-Racist AI
The path forward requires more than diversity initiatives or technical tweaks. We need a fundamental reimagining of how AI systems engage with issues of race and power. This transformation begins with three key principles:
  • Acknowledge the Problem’s Depth — AI systems must be designed to recognize and address systemic racism rather than avoid it. This means, for example, looking beyond superficial definitions and critically examining how power relations in society are constructed. When an AI claims to be “unbiased” yet says nothing about systemic injustice, that is complicity, not neutrality.
  • Center Marginalized Voices — Rather than creating “inclusive” versions of existing AI systems, we need to develop AI that authentically serves Black and Brown communities. This means building systems that:
      • Incorporate frameworks from scholars who study systemic racism
      • Address the actual needs of affected communities
      • Challenge rather than reinforce existing power structures
  • Demand Accountability — Professionals across industries must push for:
      • Transparent auditing of AI systems for systemic bias (a simple example metric is sketched after this list)
      • Clear standards for measuring algorithmic harm
      • Regular assessment of AI’s impact on marginalized communities
      • Legal frameworks that address algorithmic discrimination
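As one concrete example of what “transparent auditing” can mean in practice, the sketch below computes group-level selection rates and a disparate-impact ratio (the four-fifths rule of thumb used in U.S. employment contexts). The field names and the toy sample are assumptions; a real audit needs far richer metrics and real outcome data.

```python
# A minimal auditing sketch: compare selection rates across groups and report
# a disparate-impact ratio. Field names and the toy sample are illustrative.
from collections import defaultdict


def selection_rates(records):
    """records: iterable of dicts with a 'group' label and a boolean 'selected'."""
    totals, selected = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        selected[r["group"]] += int(r["selected"])
    return {g: selected[g] / totals[g] for g in totals}


def disparate_impact_ratio(rates):
    """Lowest group rate divided by the highest; values below 0.8 are a common red flag."""
    return min(rates.values()) / max(rates.values())


if __name__ == "__main__":
    sample = [
        {"group": "A", "selected": True},  {"group": "A", "selected": True},
        {"group": "A", "selected": False}, {"group": "B", "selected": True},
        {"group": "B", "selected": False}, {"group": "B", "selected": False},
    ]
    rates = selection_rates(sample)
    print("Selection rates by group:", {g: round(r, 2) for g, r in rates.items()})
    print("Disparate-impact ratio:", round(disparate_impact_ratio(rates), 2))
```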

The Role of Professionals
As educated professionals, we have a unique responsibility and opportunity to shape how AI technologies develop. This means:
  • Critically examining AI tools used in your industry
  • Questioning claims of “AI fairness” and demanding evidence
  • Supporting initiatives that genuinely address systemic bias
  • Advocating for ethical AI development in your professional networks

Conclusion
I have met significant resistance in sharing the research described above demonstrating “algorithmic fragility,” but this need not be the future of AI. By learning how algorithmic bias does and does not operate, and by recognizing superficial solutions for what they are so we can push for transformational change, we can work toward AI systems that help reduce systemic inequalities rather than simply cement them. Whether that happens depends on choices and responsibilities that we all must take up.

The Systemic Racism Dismantler, a prototype AI system developed through my research, demonstrates how these principles can be put into practice. Unlike mainstream AI systems that avoid confronting racism’s origins, this model explicitly incorporates the theoretical frameworks of scholars like Dr. Frances Cress Welsing and Dr. Amos Wilson to provide deeper understanding of systemic racism. When tested against other AI systems, including those marketed as “inclusive,” the Dismantler consistently demonstrates the ability to engage meaningfully with issues of power and race without resorting to algorithmic fragility. This proves that AI can be developed to serve marginalized communities authentically rather than performatively — but only when we prioritize genuine counter-racist frameworks over surface-level inclusion.

Meet Our Contributor — Janga Bussaja
Janga Bussaja is a social engineer and founder of Planetary Chess Inc., developing AI systems to address racial biases while ranking in the top 5% of SSRN authors for research on systemic racism. Learn more about Janga and his work here. 
