Please use this identifier to cite or link to this item:
http://dspace.iitrpr.ac.in:8080/xmlui/handle/123456789/4590
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kamakshi, V | - |
dc.contributor.author | Krishnan, N C. | - |
dc.date.accessioned | 2024-06-09T13:31:45Z | - |
dc.date.available | 2024-06-09T13:31:45Z | - |
dc.date.issued | 2024-06-09 | - |
dc.identifier.uri | http://dspace.iitrpr.ac.in:8080/xmlui/handle/123456789/4590 | - |
dc.description.abstract | Explainable Artificial Intelligence (XAI) has emerged as a crucial research area to address the interpretability challenges posed by complex machine learning models. In this survey paper, we provide a comprehensive analysis of existing approaches in the field of XAI, focusing on the tradeoff between model accuracy and interpretability. Motivated by the need to address this tradeoff, we conduct an extensive review of the literature, presenting a multi-view taxonomy that offers a new perspective on XAI methodologies. We analyze various sub-categories of XAI methods, considering their strengths, weaknesses, and practical challenges. Moreover, we explore causal relationships in model explanations and discuss approaches dedicated to explaining cross-domain classifiers. The latter is particularly important in scenarios where training and test data are sampled from different distributions. Drawing insights from our analysis, we propose future research directions, including exploring explainable allied learning paradigms, developing evaluation metrics for both traditionally trained and allied learning-based classifiers, and applying neural architectural search techniques to minimize the accuracy–interpretability tradeoff. This survey paper provides a comprehensive overview of the state-of-the-art in XAI, serving as a valuable resource for researchers and practitioners interested in understanding and advancing the field. | en_US |
dc.language.iso | en_US | en_US |
dc.subject | explainable AI survey | en_US |
dc.subject | interpretable image classification | en_US |
dc.subject | cross-domain explainers | en_US |
dc.subject | causal explanations | en_US |
dc.subject | posthoc explanations | en_US |
dc.subject | antehoc explanations | en_US |
dc.subject | concept-based explanations | en_US |
dc.subject | natural language explanations | en_US |
dc.subject | counterfactual explanations | en_US |
dc.subject | model-agnostic explanations | en_US |
dc.title | Explainable Image Classification: The Journey So Far and the Road Ahead | en_US |
dc.type | Article | en_US |
Appears in Collections: | Year-2023 |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Full Text.pdf | | 1.57 MB | Adobe PDF | |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.