The Michigan Institute for Data Science (MIDAS) hosts the Building Ethical and Trustworthy AI forum in the Lurie Engineering Center Tuesday. Julianne Yoon/Daily.

About 50 researchers, students and industry professionals gathered in the Lurie Engineering Center on Tuesday for a discussion about the ethics of artificial intelligence. The forum, titled “From Theory to Practice: Building Ethical and Trustworthy AI,” was hosted by the Michigan Institute for Data Science and featured three keynote speakers as well as “lightning talks” and panel discussions.

Information Ph.D. candidate Kwame Porter Robinson spoke at the forum about creating computing infrastructures designed around the needs of communities. He said AI policy often uses a top-down approach, focusing on making changes at the government or corporate level, rather than a bottom-up approach that would start with community members and workers.

“Typically in AI ethics … there’s a preference for top-down approaches or points of intervention in terms of policy or … regulation, but there are alternatives,” Porter Robinson said. “You can begin with workers, you can begin with people that are directly affected and ask them what they think.”

Keynote speaker Jenna Wiens, associate professor of computer science and engineering, talked about the potential dangers of AI bias in health care settings. She explained that artificial intelligence programs can pick up on certain correlations — for example, that patients with pacemakers are more likely to be diagnosed with heart failure — but those correlations are not necessarily useful in making a diagnosis.

“It’s a problem because this model learned this association of having a pacemaker and being at greater risk of heart failure,” Wiens said. “If the pacemaker wasn’t there, the clinician would still diagnose the individual with heart failure. The pacemaker is not a clinically relevant radiological finding.”

One of the forum’s speakers, Elisa Ngan, assistant professor of practice in urban technology at the Taubman College of Architecture and Urban Planning, said AI bias is not going to disappear completely, so it is important to consider new ways to overcome bias and create software that minimizes harm.

“I think it’s important to realize that the question of bias is not necessarily going to go away entirely,” Ngan said. “So thinking about what the real problem is and the sort of operation (that) is needed to deploy a solution and whether we need to innovate on the way that we work itself, to transition, you know, from agile development to a whole different way of working as a team is what’s necessary to build safe software.”

Merve Hickok, adjunct lecturer with the School of Information and president of the Center for AI & Digital Policy, was a keynote speaker at the forum. In an interview with The Daily after the event, she said she believes it is important to prioritize equity when setting policies on AI.

“So, where we are using (AI) for the criminal justice system or access to government benefits, access to education, access to credit, is it impacting our civil rights?” Hickok said. “Is it undermining our civil rights? Is it discriminating against certain groups? I think those spaces where there’s a higher risk of undermining the rights should be regulated first.”

According to Hickok, a key challenge to enacting AI regulations is a lack of concrete action from lawmakers and federal agencies.

“You see a lot of conversations, but not necessarily implementation,” Hickok said. “So you’re asking all federal agencies as well as lawmakers to ensure that these conversations are happening and regulations are put in place. You can talk (for) months and months about the impact and the risks of AI. However, because it’s already impacting civil rights and human rights, you should have protections in place.”

Ngan said she believes interdisciplinary work is important in creating better AI systems and ethical frameworks.

“I feel like everyone really wants to solve this issue, but we’re all kind of working in our disciplinary silos,” Ngan said. “Trying to find a way outside of that to capture more of the context and the human problems is important to creating a system that’s actually viable in the long term and that doesn’t burden individuals who don’t have access to designing those systems — who are not lawyers, designers, engineers, but nonetheless are impacted by it.”

Summer News Editor Abigail VanderMolen can be reached at vabigail@umich.edu