AI is already a growing presence in our world, used for tasks such as analyzing MRIs, giving financial advice and even composing music.
- Among the many questions about the ethics and governance of AI, the most important may be this: Should AI be required to explain its decisions?
- How AI functions must be interrogated, understood, and improved upon.
- "Explainable AI" means designing artificial intelligence so that it can describe, in human-readable terms, the reasons for a specific decision
- algorithms, especially those developed with advanced machine-learning techniques such as deep neural networks, can be so complex that not even their designers fully understand how they make decisions
- inputs are processed by multiple layers of self-modifying algorithms, so designers are not always able to determine, post hoc, which pathway led to a given decision
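The opacity point above can be sketched with a toy two-layer network: even with full access to every parameter, no single weight corresponds to a human-readable reason, because the decision emerges from many interacting values (the weights below are arbitrary illustrative numbers, not a trained model):

```python
import math

# A tiny two-layer network. The output depends on every weight at once;
# inspecting any individual weight gives no standalone "reason" for a decision.
# Weights are arbitrary illustrative values, not a trained model.
W1 = [[0.8, -1.2],   # input -> hidden, first hidden unit
      [0.5,  0.9]]   # input -> hidden, second hidden unit
W2 = [1.1, -0.7]     # hidden -> output

def forward(x):
    """Run a two-feature input through the network and return the raw output."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))

print(forward([1.0, 2.0]))
```

Real deep networks have millions of such weights across many layers, which is why post-hoc pathway tracing is so hard.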
Entrusting important decisions to a system that can't explain itself presents obvious dangers. In State v. Loomis, the judge's sentence was based in part on a risk score for Loomis generated by COMPAS, a commercial risk-assessment tool used, according to one study, "to assess more than 1 million offenders" over the past two decades
- Loomis could not meaningfully challenge the AI-generated risk score, because it relied on a proprietary algorithm whose exact methodology is unknown
- we need to develop ways for AI to translate its "thinking" into terms humans can understand
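In the simplest case, translating a model's decision into human terms can mean reporting each input's signed contribution to the score. A minimal sketch, assuming a hypothetical linear risk model (the feature names and weights are invented for illustration and are not COMPAS's actual methodology):

```python
# Toy linear "risk score" with a human-readable explanation of each decision.
# Feature names and weights are hypothetical, chosen only to illustrate the idea.
weights = {"prior_offenses": 0.6, "age": -0.02, "employment_years": -0.1}
bias = 1.0

def score(features):
    """Return the raw risk score for a dict of feature values."""
    return bias + sum(weights[name] * value for name, value in features.items())

def explain(features):
    """List each feature's signed contribution, largest magnitude first."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

person = {"prior_offenses": 3, "age": 34, "employment_years": 2}
print(round(score(person), 2))          # 1.92
for name, contribution in explain(person):
    print(f"{name}: {contribution:+.2f}")
```

For deep networks the same goal requires dedicated attribution methods (surrogate models, feature-attribution techniques) rather than reading weights directly, but the output sought is the same: a ranked, signed list of reasons a human can inspect and contest.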