Bi-ICE: An Inner Interpretable Framework for Image Classification via Bi-directional Interactions between Concept and Input Embeddings
Abstract
Inner interpretability is a promising field focused on uncovering the internal mechanisms of AI systems and developing scalable, automated methods to understand these systems at a mechanistic level. While significant research has been conducted on large language models, limited attention has been paid to applying inner interpretability to large-scale image tasks, with existing work focusing primarily on architectural and functional levels to visualize learned concepts. In this paper, we first present a conceptual framework that supports inner interpretability and multilevel analysis for large-scale image classification tasks. Specifically, we introduce the Bi-directional Interaction between Concept and Input Embeddings (Bi-ICE) module, which facilitates interpretability across the computational, algorithmic, and implementation levels. This module enhances transparency by generating predictions based on human-understandable concepts, quantifying their contributions, and localizing them within the inputs. Finally, we showcase enhanced transparency in image classification by measuring concept contributions and pinpointing their locations within the inputs. Our approach highlights algorithmic interpretability by demonstrating the process of concept learning and its convergence.