If you were well-qualified for a mortgage but the bank turned you down, would you wonder why? What if your company had a highly sought-after job opening but only reached out to male candidates? How would you feel if your medical provider underestimated the health needs of its sickest minority patients and so never offered them extra care?

Now what if you learned those decisions were shaped by an AI algorithm?

AI is projected to contribute over $16 trillion to the global economy by 2030. A recent IBM Institute for Business Value (IBV) survey of global CEOs revealed that 69% of participants anticipate broad benefits of generative AI across their organization, while 75% believe competitive advantage will depend on who has the most advanced solution. Like businesses here in Atlanta, all of them are under pressure to move fast. That is why we must act now.


AI-driven computer systems could soon be used to determine who receives a loan, which job candidates are interviewed and who among us gets the best medical care. The algorithms that power them have the potential to perpetuate social and cultural biases that lead to discriminatory outcomes, which reinforces the need for responsible, ethical AI development.

Traditionally, we humans rely on agreed-upon methods to guide informed decision-making, especially for decisions that affect others. When that framework for examining the rational justification of our judgments is combined with a sense of what is morally right and wrong, we call it ethics.

With the widespread evolution and adoption of AI, we must demand that those same ethics be part of our algorithms.

What does AI ethics look like?

AI ethics is a multidisciplinary field that investigates how to optimize AI’s beneficial impact while managing risks and adverse outcomes. It explores issues like data responsibility and privacy, inclusion, moral agency, value alignment, accountability and technology misuse to understand how to build and use AI in ways that align with human ethics and expectations.

Organizations have a responsibility to use AI ethically as the technology matures, because technology itself is neither ethical nor unethical. What matters is the ecosystem around it: how it is developed, deployed and used. That ecosystem includes multiple stakeholders, from researchers and coders to policymakers and consumers. Those of us in the international tech industry must also collaborate with experts from diverse fields, including ethics, law, social sciences and philosophy.

Whether an AI system is assisting users with highly sensitive decisions or helping to automate routine tasks, it should provide a sufficient explanation of its recommendations, the data used and the reasoning behind them. Building and using AI ethically is essential. According to IBM’s 2022 Global AI Adoption Index, 4 in 5 surveyed businesses using AI cite being able to explain how their AI arrived at a decision as important to their business.

In Atlanta, Emory University, Georgia Institute of Technology and my alma mater, Clark Atlanta University, are already making inroads toward better understanding the intersection of AI and social justice. Like IBM, they understand that earning trust in artificial intelligence is not just a technological challenge; it’s a sociological one.

The road to responsible AI

It’s important to adopt frameworks for systemic empathy. Bias must be proactively monitored for and mitigated throughout the AI lifecycle. It can never be completely eradicated from any system built by humans, but with a holistic approach that includes people, ethical practices, data and tools, it can be managed. Essential elements are:

  • Fairness. How can developers monitor whether their AI model is fair towards everybody, including historically underrepresented groups?
  • Explainability. An AI system should be explainable, particularly with respect to what went into its algorithm’s recommendations.
  • Robustness. Can the AI model protect against intentional manipulation, so it doesn’t disproportionately benefit a particular person or group? AI-powered systems must be actively defended from adversarial attacks, minimizing security risks and enabling confidence in system outcomes.
  • Transparency. Transparent AI systems share information on what data is collected, how it will be used and stored, and who has access to it. They make their purposes clear to users. Transparency reinforces trust, and the best way to promote it is through disclosure.
  • Privacy. AI systems must prioritize and safeguard consumers’ privacy and data rights and provide explicit disclosures to users about how their personal data will be protected.

At this critical point in the adoption of AI, it is imperative that we build, deploy, regulate and use tools that minimize bias. This must apply both to an AI system’s algorithm and to the data used to train and test the system. If we don’t, AI-driven bias could have a long-range effect on underrepresented minorities, inadvertently perpetuating elitism within our systems of information.

We should be focused on establishing industry standards and best practices for responsible AI. This will involve precision regulation: establishing rules to govern the deployment of AI. We must regularly review those policies and guidelines to address emerging ethical challenges and technological advancements. This is a process of continual adaptation and development.

Ultimately, AI’s purpose is to augment human intelligence, not replace it. The key to taking advantage of AI for enhanced business productivity and customer service is to take steps that help society keep its trust in the technology. Together, we can work toward an ethical and equitable AI future for all.

Kitty Chaney Reed is the chief leadership, culture and inclusion officer at IBM.