Is transparency really the answer to the ‘black box’ problem?
To improve transparency, black box algorithms are increasingly built with functions that explain their diagnostic findings. But a recent NPR report examined why this isn’t always effective, and why a different approach to building algorithms may be the answer.
Cynthia Rudin, a computer scientist at Duke University in Durham, North Carolina, acknowledged that extra time and effort must go into algorithms to ensure transparency in life-or-death situations, but argued that “explanation model” algorithms, which run alongside black box algorithms, can also be damaging.
"These explanation models can be very dangerous," she said. "They can give you a false sense of security for a model that is not that great."
Similarly, Nigam Shah, a biomedical informatics specialist at Stanford University, told NPR that adequate testing should be the core measure of whether an algorithm can be trusted. A new mindset might be required, Shah said.
“I firmly believe that we should be thinking about algorithms differently,” Shah said in the report. “The right question to ask is, ‘When is a black box OK?’”