shapr: Prediction Explanation with Dependence-Aware Shapley Values
Source: R/shapr-package.R, shapr-package.Rd
Complex machine learning models are often hard to interpret. However, in many situations it is crucial to understand and explain why a model made a specific prediction. Shapley values constitute the only prediction explanation framework with a solid theoretical foundation. Previously known methods for estimating Shapley values do, however, assume feature independence. This package implements methods that account for feature dependence, and thereby produce more accurate estimates of the true Shapley values. An accompanying 'Python' wrapper ('shaprpy') is available through the GitHub repository.
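The sketch below illustrates a typical workflow, assuming the explain() interface of recent shapr releases; the choice of model, data split, approach, and argument names such as phi0 are illustrative and may differ between package versions.

library(shapr)

# Use the airquality data without missing values
data <- airquality[complete.cases(airquality), ]
x_var <- c("Solar.R", "Wind", "Temp", "Month")

# Explain the predictions for the first six observations,
# using the remaining observations as training data
x_explain <- data[1:6, x_var]
x_train <- data[-(1:6), x_var]
y_train <- data[-(1:6), "Ozone"]

# Fit a simple linear regression model to be explained
model <- lm(Ozone ~ Solar.R + Wind + Temp + Month, data = data[-(1:6), ])

# Dependence-aware Shapley values, here modelling the feature
# dependence with the Gaussian approach
explanation <- explain(
  model = model,
  x_explain = x_explain,
  x_train = x_train,
  approach = "gaussian",
  phi0 = mean(y_train)
)

# Plot the estimated Shapley values for the explained predictions
plot(explanation)

The returned object contains the estimated Shapley values for each explained prediction, which can be inspected directly or visualised with the package's plot method.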
Author
Maintainer: Martin Jullum Martin.Jullum@nr.no (ORCID)
Authors:
Lars Henry Berge Olsen lhbolsen@nr.no (ORCID)
Annabelle Redelmeier ardelmeier@gmail.com
Jon Lachmann Jon@lachmann.nu (ORCID)
Nikolai Sellereite nikolaisellereite@gmail.com (ORCID)
Other contributors:
Anders Løland Anders.Loland@nr.no [contributor]
Jens Christian Wahl jens.c.wahl@gmail.com [contributor]
Camilla Lingjærde [contributor]
Norsk Regnesentral [copyright holder, funder]