Study visualization methods that explain the predictions of CNNs. Implement at least two visualization algorithms on at least two datasets: at least one toy dataset and at least one real-world dataset (CIFAR and ImageNet fall in this class). Compare and analyze the visualization results across methods.
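One of the simplest visualization algorithms that qualifies here is vanilla gradient saliency (Simonyan et al. style): the importance of each input pixel is the absolute gradient of the target class score with respect to that pixel. Below is a minimal sketch using a hypothetical two-layer toy network with random stand-in weights (in a real project the weights would come from a trained CNN and the backward pass would be done by an autograd framework such as PyTorch):

```python
import numpy as np

# Hypothetical toy "network": one hidden ReLU layer and a linear output.
# The random weights are stand-ins; in practice they come from a trained CNN.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 16))   # input dim 16 -> hidden dim 8
W2 = rng.standard_normal((3, 8))    # hidden dim 8 -> 3 classes

def saliency(x, target_class):
    """Vanilla gradient saliency: |d score_c / d x|."""
    h_pre = W1 @ x                   # forward pass, pre-activation
    h = np.maximum(h_pre, 0.0)       # ReLU
    scores = W2 @ h                  # class scores
    # Manual backward pass for the target class score:
    dh = W2[target_class]            # d score_c / d h
    dh_pre = dh * (h_pre > 0)        # ReLU gradient mask
    dx = W1.T @ dh_pre               # d score_c / d x
    return np.abs(dx)                # one importance value per input element

x = rng.standard_normal(16)          # stand-in for a flattened toy image
s = saliency(x, target_class=0)
print(s.shape)                       # one saliency value per "pixel"
```

For a toy dataset such as MNIST, the resulting vector would be reshaped back to the image grid and displayed as a heatmap; the second required method (e.g. LIME, SHAP, or occlusion from the references below) can then be compared against this gradient-based map.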
Visualizing Backpropagation in Neural Network Training at Any Scale, 2022. HiPlot (GitHub), parallel-coordinate plots.
Data Visualization in Python, 2021.
Guide to Interpretable Machine Learning - Techniques to dispel the black box myth of deep learning. Towards Data Science, 2020.
Interpretability in Machine Learning, Medium, 2020.
Mengnan Du, Ninghao Liu, Xia Hu, "Techniques for Interpretable Machine Learning," Communications of the ACM, Vol. 63 No. 1, Pages 68-77, January 2020.
Decoding the Black Box: An Important Introduction to Interpretable Machine Learning Models in Python, 2019.
The great AI debate: Interpretability, Medium, 2019.
New developments in AI: systems that can explain their reasoning process to humans, 2018.
MIT Lincoln Laboratory develops AI that shows its decision-making process, 2018.
J. Yosinski, J. Clune, A. Nguyen, T. Fuchs, H. Lipson, "Understanding Neural Networks Through Deep Visualization," ICML Deep Learning Workshop, 2015.
Generate Publication-Ready Plots Using Seaborn Library, 2020.
Visdom (GitHub) [PyTorch], Facebook Research.
LIME vs. SHAP: Which is Better for Explaining Machine Learning Models? 2020.
Explainable MNIST classification, 2020.
Papers about interpretable CNN (GitHub).
GWU_data_mining/10_model_interpretability (GitHub).