Current Research Status of Explainability in Artificial Intelligence and Evaluation of Its Applications in the Medical Field
Author:
Affiliation: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences
Funding: National Natural Science Foundation of China

Ethical statement:

Abstract:

Interpretability in artificial intelligence refers to the ability of people to understand and interpret the decision-making process of machine learning models. Research in this field aims to improve the transparency of machine learning algorithms, making their decisions more trustworthy and explainable. Interpretability is crucial in artificial intelligence systems, especially in sensitive, high-stakes decision-making domains such as healthcare, finance, and law. It allows people to understand the reasoning behind a model's decisions and to verify that those decisions are fair, robust, and ethical. In the continuously evolving field of artificial intelligence, enhancing the interpretability of models is a key step towards trustworthy and sustainable AI. This article outlines the development history of explainable artificial intelligence and the technical characteristics of the main interpretability methods, with a particular focus on interpretability in the medical field. It provides an in-depth discussion of the limitations of current methods on medical imaging datasets and proposes possible directions for future exploration.

History
  • Received: March 12, 2024
  • Revised: March 12, 2024
  • Accepted:
  • Online: April 15, 2024
  • Published: