Abstract: Artificial intelligence interpretability refers to the ability of people to understand and interpret the decision-making processes of machine learning models. Research in this field aims to improve the transparency of machine learning algorithms, making their decisions more trustworthy and explainable. Interpretability is crucial for artificial intelligence systems, especially in sensitive, high-stakes decision-making domains such as healthcare, finance, and law. With interpretability, people can better understand the reasoning behind a model's decisions and verify that they are fair, robust, and ethical. In the continuously evolving field of artificial intelligence, enhancing the interpretability of models is a key step toward achieving trustworthy and sustainable AI. This article outlines the development history of interpretable artificial intelligence and the technical characteristics of various interpretability methods, with a particular focus on interpretability in the medical field. It discusses in depth the limitations of current methods on medical imaging datasets and proposes possible directions for future exploration.