# awesome_deep_learning_interpretability
Papers from recent years on the interpretability of deep learning models.
For the same list sorted by citation count, see [sort_cite.md](./sort_cite.md).
PDFs of 159 of the papers (2 of them have to be fetched from Sci-Hub) have been uploaded to [Tencent Weiyun](https://share.weiyun.com/5ddB0EQ).
Updated from time to time.
|Year|Publication|Paper|Citations|Code|
|:---:|:---:|:---:|:---:|:---:|
|2020|CVPR|[Explaining Knowledge Distillation by Quantifying the Knowledge](https://arxiv.org/pdf/2003.03622.pdf)|81|-|
|2020|CVPR|[High-frequency Component Helps Explain the Generalization of Convolutional Neural Networks](https://openaccess.thecvf.com/content_CVPR_2020/papers/Wang_High-Frequency_Component_Helps_Explain_the_Generalization_of_Convolutional_Neural_Networks_CVPR_2020_paper.pdf)|289|-|
|2020|CVPRW|[Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks](https://openaccess.thecvf.com/content_CVPRW_2020/papers/w1/Wang_Score-CAM_Score-Weighted_Visual_Explanations_for_Convolutional_Neural_Networks_CVPRW_2020_paper.pdf)|414|[Pytorch](https://github.com/haofanwang/Score-CAM)|
|2020|ICLR|[Knowledge consistency between neural networks and beyond](https://arxiv.org/pdf/1908.01581.pdf)|28|-|
|2020|ICLR|[Interpretable Complex-Valued Neural Networks for Privacy Protection](https://arxiv.org/pdf/1901.09546.pdf)|23|-|
|2019|AI|[Explanation in artificial intelligence: Insights from the social sciences](https://arxiv.org/pdf/1706.07269.pdf)|3248|-|
|2019|NMI|[Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead](https://arxiv.org/pdf/1811.10154.pdf)|3505|-|
|2019|NeurIPS|[Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift](https://papers.nips.cc/paper/9547-can-you-trust-your-models-uncertainty-evaluating-predictive-uncertainty-under-dataset-shift.pdf)|1052|-|
|2019|NeurIPS|[This looks like that: deep learning for interpretable image recognition](http://papers.nips.cc/paper/9095-this-looks-like-that-deep-learning-for-interpretable-image-recognition.pdf)|665|[Pytorch](https://github.com/cfchen-duke/ProtoPNet)|
|2019|NeurIPS|[A benchmark for interpretability methods in deep neural networks](https://papers.nips.cc/paper/9167-a-benchmark-for-interpretability-methods-in-deep-neural-networks.pdf)|413|-|
|2019|NeurIPS|[Full-gradient representation for neural network visualization](http://papers.nips.cc/paper/8666-full-gradient-representation-for-neural-network-visualization.pdf)|155|-|
|2019|NeurIPS|[On the (In)fidelity and Sensitivity of Explanations](https://papers.nips.cc/paper/9278-on-the-infidelity-and-sensitivity-of-explanations.pdf)|226|-|
|2019|NeurIPS|[Towards Automatic Concept-based Explanations](http://papers.nips.cc/paper/9126-towards-automatic-concept-based-explanations.pdf)|342|[Tensorflow](https://github.com/amiratag/ACE)|
|2019|NeurIPS|[CXPlain: Causal explanations for model interpretation under uncertainty](http://papers.nips.cc/paper/9211-cxplain-causal-explanations-for-model-interpretation-under-uncertainty.pdf)|133|-|
|2019|CVPR|[Interpreting CNNs via Decision Trees](http://openaccess.thecvf.com/content_CVPR_2019/papers/Zhang_Interpreting_CNNs_via_Decision_Trees_CVPR_2019_paper.pdf)|293|-|
|2019|CVPR|[From Recognition to Cognition: Visual Commonsense Reasoning](http://openaccess.thecvf.com/content_CVPR_2019/papers/Zellers_From_Recognition_to_Cognition_Visual_Commonsense_Reasoning_CVPR_2019_paper.pdf)|544|[Pytorch](https://github.com/rowanz/r2c)|
|2019|CVPR|[Attention branch network: Learning of attention mechanism for visual explanation](http://openaccess.thecvf.com/content_CVPR_2019/papers/Fukui_Attention_Branch_Network_Learning_of_Attention_Mechanism_for_Visual_Explanation_CVPR_2019_paper.pdf)|371|-|
|2019|CVPR|[Interpretable and fine-grained visual explanations for convolutional neural networks](http://openaccess.thecvf.com/content_CVPR_2019/papers/Wagner_Interpretable_and_Fine-Grained_Visual_Explanations_for_Convolutional_Neural_Networks_CVPR_2019_paper.pdf)|116|-|
|2019|CVPR|[Learning to Explain with Complemental Examples](http://openaccess.thecvf.com/content_CVPR_2019/papers/Kanehira_Learning_to_Explain_With_Complemental_Examples_CVPR_2019_paper.pdf)|36|-|
|2019|CVPR|[Revealing Scenes by Inverting Structure from Motion Reconstructions](http://openaccess.thecvf.com/content_CVPR_2019/papers/Pittaluga_Revealing_Scenes_by_Inverting_Structure_From_Motion_Reconstructions_CVPR_2019_paper.pdf)|84|[Tensorflow](https://github.com/francescopittaluga/invsfm)|
|2019|CVPR|[Multimodal Explanations by Predicting Counterfactuality in Videos](http://openaccess.thecvf.com/content_CVPR_2019/papers/Kanehira_Multimodal_Explanations_by_Predicting_Counterfactuality_in_Videos_CVPR_2019_paper.pdf)|26|-|
|2019|CVPRW|[Visualizing the Resilience of Deep Convolutional Network Interpretations](http://openaccess.thecvf.com/content_CVPRW_2019/papers/Explainable%20AI/Vasu_Visualizing_the_Resilience_of_Deep_Convolutional_Network_Interpretations_CVPRW_2019_paper.pdf)|2|-|
|2019|ICCV|[U-CAM: Visual Explanation using Uncertainty based Class Activation Maps](http://openaccess.thecvf.com/content_ICCV_2019/papers/Patro_U-CAM_Visual_Explanation_Using_Uncertainty_Based_Class_Activation_Maps_ICCV_2019_paper.pdf)|61|-|
|2019|ICCV|[Towards Interpretable Face Recognition](https://arxiv.org/pdf/1805.00611.pdf)|66|-|
|2019|ICCV|[Taking a HINT: Leveraging Explanations to Make Vision and Language Models More Grounded](http://openaccess.thecvf.com/content_ICCV_2019/papers/Selvaraju_Taking_a_HINT_Leveraging_Explanations_to_Make_Vision_and_Language_ICCV_2019_paper.pdf)|163|-|
|2019|ICCV|[Understanding Deep Networks via Extremal Perturbations and Smooth Masks](http://openaccess.thecvf.com/content_ICCV_2019/papers/Fong_Understanding_Deep_Networks_via_Extremal_Perturbations_and_Smooth_Masks_ICCV_2019_paper.pdf)|276|[Pytorch](https://github.com/facebookresearch/TorchRay)|
|2019|ICCV|[Explaining Neural Networks Semantically and Quantitatively](http://openaccess.thecvf.com/content_ICCV_2019/papers/Chen_Explaining_Neural_Networks_Semantically_and_Quantitatively_ICCV_2019_paper.pdf)|49|-|
|2019|ICLR|[Hierarchical interpretations for neural network predictions](https://arxiv.org/pdf/1806.05337.pdf)|111|[Pytorch](https://github.com/csinva/hierarchical-dnn-interpretations)|
|2019|ICLR|[How Important Is a Neuron?](https://arxiv.org/pdf/1805.12233.pdf)|101|-|
|2019|ICLR|[Visual Explanation by Interpretation: Improving Visual Feedback Capabilities of Deep Neural Networks](https://arxiv.org/pdf/1712.06302.pdf)|56|-|
|2019|ICML|[Towards A Deep and Unified Understanding of Deep Neural Models in NLP](http://proceedings.mlr.press/v97/guan19a/guan19a.pdf)|80|[Pytorch](https://github.com/icml2019paper2428/Towards-A-Deep-and-Unified-Understanding-of-Deep-Neural-Models-in-NLP)|
|2019|ICAIS|[Interpreting black box predictions using fisher kernels](https://arxiv.org/pdf/1810.10118.pdf)|80|-|
|2019|ACMFAT|[Explaining explanations in AI](https://arxiv.org/pdf/1811.01439.pdf)|-|-|
|2018|ICML|[Extracting Automata from Recurrent Neural Networks Using Queries and Counterexamples](https://arxiv.org/pdf/1711.09576.pdf)|169|[Pytorch](https://github.com/tech-srl/lstar_extraction)|
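
Several of the CAM-style methods in the table build a class heatmap by weighting the activation maps of a late convolutional layer; Score-CAM does this gradient-free, scoring each map by the target-class probability of the input masked with that map. As a rough illustration of that idea, here is a minimal Score-CAM-style sketch in PyTorch. It is an assumption-laden toy, not the authors' implementation (which is linked in the table): the function name `score_cam`, the choice of `model.layer4`, and the random stand-in image are all illustrative.

```python
# Minimal Score-CAM-style sketch (illustrative only; see the official repo
# https://github.com/haofanwang/Score-CAM for the authors' implementation).
import torch
import torch.nn.functional as F
from torchvision import models  # assumes torchvision >= 0.13 for `weights=`

def score_cam(model, image, target_class, layer):
    """image: (1, 3, H, W) preprocessed tensor; layer: a conv module of `model`."""
    acts = {}
    handle = layer.register_forward_hook(
        lambda mod, inp, out: acts.update(maps=out.detach()))
    with torch.no_grad():
        model(image)                          # one pass to grab the activation maps
    handle.remove()

    maps = F.interpolate(acts["maps"], size=image.shape[-2:],
                         mode="bilinear", align_corners=False)  # (1, K, H, W)
    # Normalize each map to [0, 1] so it can serve as a soft input mask.
    flat = maps.flatten(2)
    lo, hi = flat.min(-1)[0], flat.max(-1)[0]
    maps = (maps - lo[..., None, None]) / (hi - lo + 1e-8)[..., None, None]

    weights = []
    with torch.no_grad():
        for k in range(maps.shape[1]):        # K extra passes; batch these in practice
            masked = image * maps[:, k:k + 1]
            weights.append(F.softmax(model(masked), dim=1)[0, target_class])
    weights = torch.stack(weights)            # (K,) per-channel importance scores

    cam = F.relu((weights[None, :, None, None] * maps).sum(dim=1))
    return cam / (cam.max() + 1e-8)           # (1, H, W) heatmap in [0, 1]

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
x = torch.randn(1, 3, 224, 224)               # stand-in for a real preprocessed image
heatmap = score_cam(model, x, target_class=243, layer=model.layer4)
```

The per-channel forward passes are the price Score-CAM pays for being gradient-free; a practical implementation would batch the masked inputs to amortize that cost.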
