How Black Atlanta leaders could help tackle racial disparities in AI

OpenAI named five Black leaders with Atlanta ties to a new ethics council. Here are three areas in which they could help address issues of racial bias
From left, OpenAI CEO Sam Altman and Operation HOPE founder and CEO John Hope Bryant speak at the HOPE Global Forums at the Hyatt Regency in downtown Atlanta on Monday, Dec. 11, 2023. (Bita Honarvar for The Atlanta Journal-Constitution)

Five Black nonprofit, activist and educational leaders with strong Atlanta ties will be part of a new ethics council on artificial intelligence co-chaired by Sam Altman, CEO of OpenAI, the company behind ChatGPT.

AI is seen as the next frontier of technology. While some marvel at its ability to automate certain tasks, there are many questions about how to govern the technology and the risks it poses, particularly for people of color.

Former Atlanta Mayor Andrew Young, civil rights activist and King Center CEO Bernice A. King and the presidents of Spelman College and Clark Atlanta University will join artificial intelligence leaders on the new council, according to an announcement made Monday at the HOPE Global Forum in Atlanta. John Hope Bryant, founder, chairman and CEO of Atlanta-based financial education and inclusivity nonprofit Operation HOPE, will be a co-chairman.

In a Pew Research Center survey from last year, 37% of Americans said they were more concerned than excited about the increased use of AI in daily life, while 45% were equally concerned and excited. Those who were more concerned said they worried about potential job losses, privacy issues and misuse of the technology, among other things.

And many Americans don’t think that the views and experiences of people of color have been considered by AI creators. Nearly half of all those surveyed think the experiences of white adults have been taken into account in designing AI, but only 24% believe that Black adults’ experiences were considered.

Details on the new ethics council were sparse, but Bryant said it will not provide a legal framework around AI. The goal is rather to provide some ethical guidelines for the burgeoning technology.


“[T]he AI Ethics Council is designed to help ensure the inclusivity of marginalized populations and people of color during this economic revolution, helping facilitate the financial participation of challenged individuals, institutions, and communities focused on minorities,” according to a release from Operation HOPE.

Atlanta leaders will now have a seat at the table with one of the biggest AI players, though it remains to be seen what impact the council will have.

Here are three big ethical issues related to race and socioeconomic status that they might weigh in on:

1. Bias in AI

ChatGPT was trained on data from the internet written by humans — news articles, Wikipedia entries, books and more. But any gender and racial bias in those texts then becomes part of the AI and is replicated at scale. A recent study found that ChatGPT exhibited bias in programming tasks, along with other unethical behaviors.

But the problems are not just in text bots. A 2018 MIT study by Joy Buolamwini and Timnit Gebru, two of the leading Black women scholars on AI, found that gender recognition software had the largest error rates when classifying darker-skinned females. The AI performed best on light-skinned males.

And misclassification by AI has real-life impacts. There have been at least five cases in which police using facial recognition technology arrested the wrong person, according to Marketplace. In all five cases, the person wrongly arrested was a Black man.

2. Diversity in the AI workforce

AI systems reflect the data on which they were developed. But they also reflect the thoughts and choices of the people who developed them and the institutions where they were created, which is another source of bias, according to a paper by the Organization for Economic Cooperation and Development (OECD), an intergovernmental organization.

“For example, if an AI system is trained on internal firm data in order to determine who to promote, but this firm has a discriminatory history of promoting only men, AI will likely replicate these policies and continue to mostly promote men unless developers notice this problem and adjust the models accordingly,” researchers wrote in the OECD paper.

Spelman College President Dr. Helene Gayle speaks to students and faculty at Sisters Chapel at Spelman College on Friday, Sept. 23, 2022. (Natrice Miller/natrice.miller@ajc.com)

But having workers with a variety of experiences and viewpoints may help mitigate this issue, and ethics council members Helene Gayle, the president of Spelman, and Clark Atlanta President George French both lead institutions that can provide that diverse workforce.

3. Disproportionate job displacement because of AI

Ethicists believe that AI will displace jobs, and that the displacement will be more widespread than in previous technology revolutions.

“The displacement up until now has been primarily in what we think of as unskilled and blue-collar labor,” said Paul Root Wolpe, director of the Center for Ethics at Emory University. “This technological revolution is going to hit white-collar labor as hard as it’s going to hit blue-collar labor.”

But in the more immediate term, jobs like cashier, janitor, cook and retail salesperson, occupations held by many Black people and other minorities, are going to be greatly affected by automation and changing business models, according to a 2021 McKinsey study. The research estimates that 42% of the Black labor force currently holds jobs that could be subject to disruption by 2030.

