Concept-based Explanation of Deep Visual Categorization Models
Under the supervision of Professors Yongsheng Gao, Jun Zhou, and Andrew Lewis at the School of Engineering and Built Environment, Griffith University, this project tackles the limitations of conventional CNN explainability techniques such as GradCAM and EigenCAM, which reveal where a model looked but not what it sees.
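As a minimal sketch of what such attribution maps compute (assuming a PyTorch/torchvision setup with a ResNet-18 backbone purely for illustration, not the models or code used in this project), GradCAM weights the feature maps of a late convolutional layer by the pooled gradients of the class score and produces a spatial heatmap of where the model looked:

```python
# Minimal GradCAM-style sketch (assumes a recent PyTorch/torchvision install).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()    # any CNN backbone; untrained here
activations, gradients = {}, {}

def fwd_hook(_, __, output):
    activations["maps"] = output                # (1, C, H, W) feature maps

def bwd_hook(_, grad_in, grad_out):
    gradients["maps"] = grad_out[0]             # gradients w.r.t. those feature maps

layer = model.layer4[-1]                        # last conv block of ResNet-18
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)                 # placeholder input image tensor
score = model(x)[0].max()                       # score of the predicted class
model.zero_grad()
score.backward()

weights = gradients["maps"].mean(dim=(2, 3), keepdim=True)              # pooled gradients
cam = F.relu((weights * activations["maps"]).sum(dim=1, keepdim=True))  # weighted sum of maps
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)                # heatmap in [0, 1]
```

The resulting heatmap localises evidence spatially, but it does not say which visual concept drove the decision, which is the gap concept-based explanation methods aim to fill.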
Traditional methods apply dimensionality reduction to intermediate-layer feature maps to uncover human-interpretable concepts; however, they typically rely on linear reconstruction assumptions and offer only a limited view of model faithfulness. Specifically, while fidelity measures how accurately the discovered concepts predict the model's outcomes, it does not address the consistency or meaningfulness of those concepts, and the linearity assumption can lead to significant information loss.
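As an illustration of this style of concept discovery (a hedged sketch using NumPy and scikit-learn's NMF on toy activations, not the project's frameworks), intermediate feature maps are factorised into a small set of non-negative "concepts", and fidelity is then scored by how well those concepts linearly reconstruct the original activations:

```python
# Toy concept-factorisation sketch: random non-negative data stands in for
# real ReLU activations from an intermediate CNN layer.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# N spatial positions (flattened H*W over a batch) x C channels.
A = np.abs(rng.normal(size=(2048, 512)))

k = 10                                       # number of concepts to discover
nmf = NMF(n_components=k, init="nndsvda", max_iter=400, random_state=0)
U = nmf.fit_transform(A)                     # (N, k) concept coefficients per position
W = nmf.components_                          # (k, C) concept directions in feature space

A_hat = U @ W                                # linear reconstruction from the concepts
fidelity = 1.0 - np.linalg.norm(A - A_hat) ** 2 / np.linalg.norm(A) ** 2
print(f"explained activation variance (a simple fidelity proxy): {fidelity:.3f}")
# A high score means the concepts reconstruct the features well, but it says nothing
# about whether each concept is consistent or meaningful to a human, which is the
# gap the project's frameworks target.
```

This makes the limitation concrete: the fidelity proxy only checks reconstruction quality under a linear model, so it cannot distinguish concepts that are stable and semantically meaningful from those that merely fit the activations.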
To address these challenges, the project introduces several frameworks published at top-tier conferences, with further work under way at A* (CORE-ranked) journals and conferences. These frameworks collectively aim to provide more meaningful, consistent, and insightful explanations of CNN logic.
Grants and Sponsors
- Griffith University International Postgraduate Research Scholarship
- Griffith University Postgraduate Research Scholarship
- Australian Research Council (ARC) Centre Program IH180100002, “ARC Research Hub for Driving Farming Productivity and Disease Prevention” (2019-2024), Lead Chief Investigator & Director: Yongsheng Gao, together with 21 Chief Investigators and Partner Investigators from 6 universities, CSIRO, and 5 industry partners.
- ARC Research Hub for Driving Farming Productivity and Disease Prevention
- Institute for Integrated and Intelligent Systems (IIIS)
Publications from the project
Akpudo, U. E., Gao, Y., Zhou, J., & Lewis, A. (2024, July). Coherentice: Invertible Concept-Based Explainability Framework for CNNs beyond Fidelity. In 2024 IEEE International Conference on Multimedia and Expo (ICME) (pp. 1-6). IEEE.
Akpudo, U. E., Yu, X., Zhou, J., & Gao, Y. (2023, November). What EXACTLY are We Looking at?: Investigating for Discriminance in Ultra-Fine-Grained Visual Categorization Tasks. In 2023 International Conference on Digital Image Computing: Techniques and Applications (DICTA) (pp. 129-136). IEEE.
Akpudo, U. E., Yu, X., Zhou, J., & Gao, Y. (2023, November). NCAF: NTD-based Concept Activation Factorisation Framework for CNN Explainability. In 2023 38th International Conference on Image and Vision Computing New Zealand (IVCNZ) (pp. 1-6). IEEE.
Akpudo, U. E., Gao, Y., Lewis, A., Tenagyei, E. K., Liao, Y., & Zhou, J. (2024, December). Evaluating Concept Explanations for CNNs Under Adversarial Image Transformations. In International Conference on Intelligent and Innovative Computing Applications (pp. 37-44).
Akpudo, U. E., Effoduh, J. O., Kong, J. D., & Gao, Y. (2024, December). Unveiling AI Concerns for Sub-Saharan Africa and its Vulnerable Groups. In International Conference on Intelligent and Innovative Computing Applications (pp. 45-55).
Liao, Y., Zhang, W., Tenagyei, E. K., Akpudo, U. E., & Gao, Y. (2024, December). Interpretable Protocol: A Novel Learning Strategy for COVID-19 Diagnosis on Chest-X-Ray Images. In International Conference on Intelligent and Innovative Computing Applications (pp. 22-29).