Because everyone deserves the opportunity to benefit from this technological revolution, it’s critical that we make artificial intelligence education available to the community at large. As founding director of Emory University’s new Center for AI Learning, I’ve spent countless hours beyond the campus, meeting people who might not know much about how AI works but who have heard it could be a game-changer. I’ve visited 22 cities across Georgia, listening to people in almost every walk of life. It’s been an education.

During my visits with communities big and small, urban and rural, I noticed a palpable anxiety about AI and its potential impacts. People worry about how AI will affect their lives and the well-being of their children. They worry that by relying on these technologies in school and our everyday activities — outsourcing our thinking, so to speak — we might lose our ability to think for ourselves. I’ve even heard stories of children reprimanding their parents for using AI, claiming it’s stealing their information. This generational divide over AI is quite striking, given that younger generations are usually fearless early adopters.

Many Georgians are anxious about AI taking over their jobs — a fear that classically accompanies any technological revolution, but one that is particularly acute with AI. People are concerned about their privacy being compromised by governments or companies using AI, a worry made all the more salient on the heels of the National Public Data breach. Business owners who would otherwise be investing in these new technologies to make their operations more effective are keeping their capital on the sidelines because they fear their investments will be regulated or become unlawful when state governments catch up. With the upcoming elections, people are worried about AI-generated disinformation showing their leaders or loved ones saying or doing things they never actually said or did.

To put it plainly, people are wondering: How will this powerful new thing we don’t completely understand affect our lives, our jobs, our kids, our privacy, our values, our vote?

Though we can allay some of these fears through public outreach and education — indeed, I’ve seen remarkable value in dispelling common misconceptions, such as the belief that AI is “conscious” (it’s not) — it is public policy that must address the bulk of the fears and concerns I heard traveling around Georgia.

Trustworthy, safe AI and diversity of thought

The laws and rules we’re now creating around AI must take people’s concerns into account, which is why I’ll be taking the lessons I’ve learned in Georgia to Washington and the new U.S. Artificial Intelligence Safety Consortium, of which I’m a member. The National Institute of Standards and Technology, housed in the U.S. Department of Commerce, built the consortium at President Joe Biden’s direction to develop trustworthy AI and reliable, widely accepted methods to measure its performance. With dozens of proposals on AI security and privacy before Congress and in at least 40 states, the consortium’s task is to stand back from day-to-day politics and bring together many perspectives and forms of knowledge to address the general unease I’ve heard and seen. To do that, the consortium’s 200 members need, in addition to essential technical expertise, an intuitive feel for the public’s fears and concerns about AI, as well as for its excitement about the opportunities. They’ll need to hear the concerns I heard traveling around Georgia: that safety and equal access matter not only within the industry but among the public at large.

That kind of responsiveness to public attitudes can take place only in an intellectually diverse environment. My own background in AI, along with my experience in Washington at the White House, taught me how important it is to incorporate a broad spectrum of perspectives into the consortium’s work. I was encouraged not only by the fact that two of the consortium’s five working groups deal with risk management and with safety and security, but also by the wide range of backgrounds among its members. Big Tech firms such as Adobe and Microsoft are part of the consortium, but they don’t dominate it. Other voices are in the room, including academics from prominent schools such as Cornell, Purdue and American University, as well as civil society nonprofits such as the Center for a New American Security and the Data & Society Research Institute. This diversity will be needed to craft forms of governance that balance the risks of generative AI models — like bad actors manipulating content into deepfakes — against their benefits, like improving health outcomes in areas with shortages of care providers.

Even if the more far-fetched fears never play out, we should remember that technological power is balanced by human values, appropriate incentives, and institutional checks and balances. Although we AI leaders might dismiss the fearmongering wrought by notable tech luminaries — which often, I believe, goes too far — we must also recognize that the popular fear of AI doomsday scenarios is real, even if the scenarios themselves are not.

If it listens closely to these concerns, the consortium can contribute ideas to the emerging dialogue about AI regulation that are unlikely to emerge elsewhere, such as protecting the likenesses and abilities of those who have honed their personas and crafts, or defending consumers, especially younger ones, against obscene, violent, and hateful synthetic content. The consortium can also offer guidance on safe public data sharing to ensure health applications work for every demographic, not just those for which we have the most data. I hope the consortium benchmarks the environmental burden of AI technologies as well, so we can remain cognizant of it as the field advances.

Ideally, the consortium will issue not rigid rules but a framework that works at both the federal and state levels to set priorities for, and approaches to, AI governance. I’ve already found a spirit of high hope that we can produce principles useful and influential enough to give legislators specific recommendations on the guardrails governing the use of AI, so that everyone can confidently and comfortably use it for their own — and the greater — good.

Joe Sutherland is director of the Center for AI Learning at Emory University.