Computer Science Professor Cynthia Dwork Delivers Annual Ding Shum Lecture

Harvard Computer Science professor Cynthia Dwork discussed the shortcomings of risk prediction algorithms at the Center of Mathematical Sciences and Applications’ annual Ding Shum lecture Tuesday evening.

During the talk, titled “Measuring Our Chances: Risk Prediction in This World and its Betters,” Dwork presented her research to a crowd of nearly 100 attendees. The Ding Shum lecture, funded by Ding Lei and Harry Shum, covers an active area of research in applied mathematics. The series had been canceled for the previous three years due to the Covid-19 pandemic.

Tuesday’s discussion, which was also livestreamed online, was moderated by CMSA postdoctoral fellow Faidra Monachou.

Dwork began her talk by presenting what she described as a fundamental problem in how algorithms are applied.

“I’m claiming that risk prediction is the defining problem of AI, meaning that there’s a huge definitional problem associated with what we commonly call risk prediction,” Dwork said.

She said that although a prediction may assign a numerical probability to an event, such probabilities are difficult to interpret for one-time events, which either happen or do not.

“You have, maybe, intuitive senses of what these mean. But in fact, it’s quite mysterious. What is the meaning of the probability of an unrepeatable event?” Dwork asked.

In addition, it can be difficult to tell whether a prediction function is accurate based on observing binary outcomes, Dwork said.

“If I predict something with three quarters probability, both ‘yes’ and ‘no’ are possible outcomes. And when we see one, we don’t know whether I was right about that three quarters,” Dwork said.

“How do we say: is that a good function, or is that a bad function? We don’t even know what it’s supposed to be doing,” she added.
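
A toy sketch can make the verification problem concrete; the code below is an illustration with made-up probabilities, not material from the talk. Whichever way a single event resolves, the observation is consistent with a claimed probability of three quarters, so one outcome can neither vindicate nor indict the claim.

```python
import random

# Toy illustration: a single binary outcome cannot confirm or refute
# a claimed probability. (Illustrative numbers, not from the talk.)

def observe_once(true_prob: float) -> int:
    """Draw one binary outcome; for an unrepeatable event,
    this is all an observer ever gets to see."""
    return 1 if random.random() < true_prob else 0

claimed = 0.75
outcome = observe_once(true_prob=0.5)  # suppose the claim was actually wrong

# Under the claim, both outcomes were possible, so neither falsifies it.
likelihood = claimed if outcome == 1 else 1 - claimed
print(f"outcome={outcome}; probability the 0.75 claim assigned to it: {likelihood}")
# Repeated trials would expose the gap, but a one-time event offers no repeats.
```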

To illustrate the complexity of the issue, Dwork asked the audience to consider two worlds. In one, “there is no uncertainty in what’s going to happen in the future,” but current predictors may lack enough information to make an accurate guess. In the other, “there’s real inherent uncertainty,” meaning that outcomes may change even if prediction processes are perfect.

The issue, Dwork said, is “these two worlds are really indistinguishable.”

Because such algorithms take in inputs and predict a “yes” or “no” output, Dwork said, the predictions will not reveal whether real life is “arbitrary and real valued” or binary.
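
A minimal sketch of the two worlds, under assumed toy distributions rather than Dwork’s own construction, shows why binary observations cannot separate them: a deterministic world whose randomness lies only in which individual is observed produces the same data as a world of genuinely random outcomes.

```python
import random

# Illustrative assumption: in World 1, every individual's outcome is
# predetermined and the only randomness is which individual we observe;
# in World 2, each outcome is an irreducible 3/4-biased coin flip.

PREDETERMINED = [1, 1, 1, 0]  # four types of individuals, three fated to "yes"

def sample_world_1() -> int:
    return PREDETERMINED[random.randrange(4)]  # no uncertainty per individual

def sample_world_2() -> int:
    return 1 if random.random() < 0.75 else 0  # inherent uncertainty

n = 100_000
print("World 1 'yes' rate:", sum(sample_world_1() for _ in range(n)) / n)
print("World 2 'yes' rate:", sum(sample_world_2() for _ in range(n)) / n)
# Both rates converge to ~0.75: from binary outcomes alone, a
# deterministic-but-underinformed world looks identical to a random one.
```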

But Dwork said that in her research, she has been able to draw on the theory of pseudorandomness to help detect the difference between these two situations.

“So now, instead of trying to determine whether a sequence is random, or pseudorandom, our distinguishers are trying to determine whether an outcome is drawn from real life,” she explained.
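
In the pseudorandomness literature, a distinguisher is a test that tries to tell a pseudorandom sequence apart from a truly random one. A sketch of the analogous test for predictors might look like the following; the predictor, the “real life” distribution, and the test statistic are all assumptions made for illustration.

```python
import random

# Hedged sketch of the analogy: a distinguisher sees (input, outcome) pairs
# and tries to tell whether outcomes came from real life or were sampled
# from the predictor's own probabilities. (Toy setup, not Dwork's.)

def predictor(x: float) -> float:
    return x  # toy risk score: predict probability x for input x

def real_pair():
    x = random.random()
    return x, int(random.random() < x ** 2)  # "real life": true probability is x^2

def simulated_pair():
    x = random.random()
    return x, int(random.random() < predictor(x))  # outcomes drawn from the model

def distinguisher(pairs) -> float:
    # Test statistic: average outcome among high-risk inputs.
    high = [y for x, y in pairs if x > 0.8]
    return sum(high) / len(high)

n = 50_000
print("real life:", distinguisher([real_pair() for _ in range(n)]))
print("simulated:", distinguisher([simulated_pair() for _ in range(n)]))
# A persistent gap between the two averages means this test can tell the
# predictor's simulated world apart from the real one; the goal is a
# predictor that no such efficient test can distinguish.
```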

Dwork, who is also a faculty affiliate at Harvard Law School and a distinguished scientist at Microsoft Research, is known for her contributions to the areas of differential privacy and algorithmic fairness. In 2020, she received the Institute of Electrical and Electronics Engineers’ Hamming Medal for her work in “privacy, cryptography, and distributed computing, and for leadership in developing differential privacy.”

To conclude her talk, Dwork presented a roadmap of her future research.

“The next step is to try to develop an understanding of what it is that transformations that satisfy this technical property can actually accomplish,” Dwork said.
