IBM proposes safeguard document to increase trust in AI
Researchers from IBM recently proposed a new safeguard designed to increase transparency and trust in artificial intelligence (AI), according to research published on the company’s website.
Scientists from IBM’s Trusted AI division proposed that AI developers publish what is called a supplier’s declaration of conformity (SDoC), a document that would detail the safety testing performed on an algorithm and the datasets used to evaluate it, along with a host of other information.
“The accuracy and reliability of machine learning algorithms are an important concern for suppliers of artificial intelligence services, but considerations beyond accuracy, such as safety, security, and provenance, are also critical elements to engender consumers’ trust in a service,” wrote lead author Michael Hind and colleagues.
Hind et al. suggested SDoCs would answer specific questions to provide greater transparency into the safety and performance of an algorithm, such as: “Which datasets was the service tested on?” and “Was the dataset checked for bias?”
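For illustration only, the sketch below imagines how the kinds of questions an SDoC poses might be captured as machine-readable metadata published alongside an AI service. The class and field names are assumptions made for this example, not taken from IBM’s proposal.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class SupplierDeclarationOfConformity:
    """Hypothetical fields an SDoC-style document might record for an AI service.

    Field names are illustrative, loosely mirroring the questions the authors
    pose (test datasets, bias checks, safety, security, provenance).
    """
    service_name: str
    intended_use: str
    test_datasets: list = field(default_factory=list)  # "Which datasets was the service tested on?"
    bias_checked: bool = False                          # "Was the dataset checked for bias?"
    safety_notes: str = ""                              # known failure modes, operating limits
    security_notes: str = ""                            # e.g., robustness to adversarial inputs
    provenance: str = ""                                # where training data and models came from

# Example: publish the declaration as machine-readable JSON alongside the service.
sdoc = SupplierDeclarationOfConformity(
    service_name="example-image-triage-api",
    intended_use="Flagging scans for human review, not autonomous diagnosis",
    test_datasets=["internal-holdout-2018", "public-benchmark-x"],
    bias_checked=True,
    safety_notes="Not validated for pediatric cases.",
    provenance="Trained on de-identified records collected 2015-2017.",
)
print(json.dumps(asdict(sdoc), indent=2))
```

In this reading, the SDoC functions less like marketing copy and more like a structured datasheet a consumer could inspect, or even check programmatically, before adopting a service.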
The authors argue that, unlike industries such as transportation, infrastructure and finance, which conduct exhaustive testing against known metrics, AI has no equivalent testing regime to ensure algorithms will perform as claimed.
The goal of the SDoC is to provide additional insight into, and ultimately trust in, AI algorithms, the group noted, but there is still a long road ahead.
“The final piece to build trust is transparent documentation about the service, which we see as a variation on declarations of conformity,” Hind and colleagues wrote. “We are not there yet, but we see our work as a first step at defining which questions to ask and metrics to measure towards development and adoption of broader industry practices and standards.”