DARPA Is Working to Make AI More Trustworthy

Futurism – When it comes to artificial intelligence, there’s a certain “black box” behind decisions: even AI developers themselves can’t quite understand or anticipate the choices a system makes. We do know that neural networks are taught to make these choices by exposure to huge data sets; from there, they train themselves to apply what they have learned. It’s rather difficult to trust what one doesn’t understand.
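To make the point concrete, here is a minimal sketch of that kind of training loop in PyTorch, using a made-up toy dataset (the data, model size, and learning rate are all illustrative, not from the article). The result of training is a tangle of numeric weights rather than a human-readable rule, which is the “black box” at issue.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical toy dataset: 256 examples, 10 features each, binary labels.
X = torch.randn(256, 10)
y = (X.sum(dim=1) > 0).long()  # an arbitrary rule the network must discover

# A small feed-forward network; real systems have millions of weights.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)  # how wrong the current predictions are
    loss.backward()              # gradients nudge every weight at once
    optimizer.step()

# The trained model predicts well, but its "reasoning" is only weight values.
print(f"final loss: {loss.item():.3f}")
```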

The U.S. Defense Advanced Research Projects Agency (DARPA) wants to crack open this black box, and its first step is to fund eight computer science professors at Oregon State University (OSU) with a $6.5 million research grant. “Ultimately, we want these explanations to be very natural — translating these deep network decisions into sentences and visualizations,” OSU’s Alan Fern, principal investigator for the grant, said in a press release.
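One widely used starting point for such explanations is gradient saliency, which scores how much each input feature influenced a decision. The sketch below is purely illustrative; the article does not say which methods the OSU team will pursue, and the model and input here are hypothetical.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a trained classifier (untrained here, for brevity).
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

x = torch.randn(1, 10, requires_grad=True)  # the single input to explain
score = model(x)[0, 1]                      # the network's score for class 1
score.backward()                            # gradient: each feature's pull on the score

# Larger gradient magnitude = more influence on this particular decision.
saliency = x.grad.abs().squeeze()
top = saliency.argsort(descending=True)[:3]
print("most influential input features:", top.tolist())
```

A ranking like this is a long way from the natural-language explanations Fern describes, but it shows the basic move: tracing a decision back to the evidence that drove it.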

Read more at Futurism.