AI Experts Discuss the Relationship Between AI and Its Users at Radcliffe Symposium

Experts on artificial intelligence discussed the future of AI, its ethical implications, and its practical applications at a virtual symposium hosted by Harvard’s Radcliffe Institute for Advanced Study on Friday.

The symposium, titled “Decoding AI: The Science, Politics, Applications, and Ethics of Artificial Intelligence,” was divided into four speaker sessions.

In the first session, Fernanda Viégas, a professor of computer science at Harvard, unpacked what machine learning means, explaining that it eliminates the need for rule-based programming. Instead, programmers specify a goal, known as an “objective function.”

“The system creates its own sets of rules to understand that space,” Viégas said. “This is really great because what it does is that it unlocks the possibility of us trying to solve problems for which we don’t know the rules.”
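Viégas’s description can be made concrete with a minimal sketch of what specifying an “objective function” looks like in practice. The toy data, model, and mean-squared-error objective below are illustrative assumptions, not examples shown at the symposium: the programmer writes no rules relating input to output, only an objective for an optimizer to minimize.

```python
import numpy as np

# Toy data: inputs x and targets y follow a hidden rule (here, y ≈ 3x + 2)
# that the programmer never writes down.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3 * x + 2 + rng.normal(0, 0.1, size=100)

# Instead of rules, the programmer supplies an objective function:
# mean squared error scoring how well candidate parameters fit the data.
def objective(w, b):
    return np.mean((w * x + b - y) ** 2)

# Gradient descent searches for the parameters that minimize the objective,
# in effect letting the system derive its own rule from the data.
w, b = 0.0, 0.0
learning_rate = 0.1
for _ in range(500):
    error = w * x + b - y
    w -= learning_rate * np.mean(2 * error * x)
    b -= learning_rate * np.mean(2 * error)

print(f"learned rule: y ≈ {w:.2f}x + {b:.2f}, objective = {objective(w, b):.4f}")
```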

Viégas said machine learning can be useful in fields such as medicine, where AI can diagnose diseases based on large amounts of data.

However, Viégas said the use of AI can pose challenges with regard to how data is used. She cited the example of an AI system trained to diagnose retinal disease that unexpectedly used the same health data to predict patients’ cardiovascular risk factors.

This dynamic puzzled physicians, who could not tell how the system had inferred that information. The example illustrates the need for greater “transparency and explainability” in machine learning, Viégas said.

“One of the reasons why everybody tends to talk about AI interpretability and explainability is because you want to make sure that these machines are not doing anything incorrectly or not causing harm,” Viégas said.
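One standard way to probe that kind of opaque behavior is to test which inputs a trained model actually relies on. The sketch below implements permutation importance, a common interpretability technique; the technique choice and the `model_fn` interface are assumptions for illustration, not methods Viégas named.

```python
import numpy as np

def permutation_importance(model_fn, X, y, n_repeats=10, seed=0):
    """Score each feature by how much shuffling it degrades accuracy."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(model_fn(X) == y)
    drops = []
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            # Shuffle feature j to break its link to the labels.
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            scores.append(np.mean(model_fn(X_perm) == y))
        drops.append(baseline - np.mean(scores))
    return drops

# Hypothetical use: model_fn is any prediction function, e.g. clf.predict.
# A feature whose shuffling causes a large accuracy drop is one the model
# leans on heavily -- the kind of hidden dependence the puzzled physicians
# would have wanted surfaced.
```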

Viégas also recommended that developers work with a diverse range of users and stakeholders when designing their products.

“If this is something that is going to impact communities, you have to take their perspective into account,” she said. “This is incredibly important to be including their perspective in the development — not only afterwards, but in the development of your technology.”

In the third session, Rana el Kaliouby, an executive fellow at Harvard Business School, discussed the connection between AI and people. She said a primary goal of future AI development is bringing emotional intelligence to machines and technology.

“It’s important that these technologies and these AI systems have IQ, but it’s also important that we take a very human-centric approach to build empathy and emotional intelligence into these machines,” el Kaliouby explained.

Developers can attempt to “humanize” their technology by incorporating the nonverbal cues humans use to communicate their emotions, such as facial expressions, posture, and vocal intonation, according to el Kaliouby.

“If you combine all of these nonverbal signals, that gives you a really good picture of the emotional and cognitive state of a person, and you’re then able to leverage this information to make all sorts of decisions,” el Kaliouby added. “Humans do that all the time to build empathy, and to build trust, which is needed in our human-machine interfaces.”
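As a rough illustration of what combining nonverbal signals can look like, the sketch below fuses hypothetical per-channel scores from separate face, posture, and voice models into a single estimate. The channels, weights, and labels are all illustrative assumptions, not el Kaliouby’s actual system.

```python
import numpy as np

# Hypothetical per-channel scores in [0, 1] from three separate models:
# a facial-expression classifier, a posture estimator, a vocal-prosody model.
signals = {
    "face":    np.array([0.8, 0.1, 0.1]),  # scores for (engaged, neutral, frustrated)
    "posture": np.array([0.6, 0.3, 0.1]),
    "voice":   np.array([0.5, 0.2, 0.3]),
}

# Late fusion: a weighted average of the channel scores. Real systems often
# learn these weights from data; here they are fixed, illustrative values.
weights = {"face": 0.5, "posture": 0.2, "voice": 0.3}
fused = sum(w * signals[ch] for ch, w in weights.items())

labels = ["engaged", "neutral", "frustrated"]
print("estimated state:", labels[int(np.argmax(fused))])
```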

The automotive industry seeks to leverage this technology in smart cars that evaluate whether a driver is distracted, el Kaliouby said.

El Kaliouby said current AI technology enables developers to build a more complete understanding of a driver’s mental state.

“We can understand combined face detection with body key point detection,” she said. “We tie all of that with an understanding of the emotional state of the individuals in the vehicle.”
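A hedged sketch of that driver-monitoring idea: combine a face or gaze signal with body key points over a window of frames, and flag distraction when too many frames look off-task. The thresholds, field names, and decision rule below are hypothetical illustrations, not the system el Kaliouby described.

```python
from dataclasses import dataclass

@dataclass
class DriverFrame:
    eyes_on_road: bool        # from a hypothetical gaze/face-detection model
    hand_on_wheel: bool       # from hypothetical body key-point detection
    head_yaw_degrees: float   # head rotation away from straight ahead

def is_distracted(frames: list[DriverFrame], threshold: float = 0.5) -> bool:
    """Flag distraction if the driver looks off-task in too many frames."""
    off_task = sum(
        1 for f in frames
        if not f.eyes_on_road or not f.hand_on_wheel
        or abs(f.head_yaw_degrees) > 30
    )
    return off_task / len(frames) > threshold

# Hypothetical window of three frames:
window = [
    DriverFrame(True, True, 5.0),
    DriverFrame(False, True, 40.0),    # glancing away, perhaps at a phone
    DriverFrame(False, False, 45.0),
]
print(is_distracted(window))  # True: 2 of 3 frames look off-task
```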

Ultimately, Viégas said, future developers need to invest in ensuring that the technology they design is controlled by its users, rather than the other way around.

“We can learn from these systems and we should be controlling them,” Viégas said. “They shouldn’t control us, we should be controlling them.”

—Staff writer Christie K. Choi can be reached at christie.choi@thecrimson.com.

—Staff writer Jorge O. Guerra can be reached at jorge.guerra@thecrimson.com. Follow him on Twitter @jorgeoguerra_.
