ERATO感謝祭 (ERATO Thanksgiving Festival) Season IV
[Reference] Satoshi Hara and Takanori Maehara. Enumerate Lasso Solutions for Feature Selection. In Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI'17), pages 1985--1991, 2017.
1. The document discusses energy-based models (EBMs) and how they can be applied to classifiers. It introduces noise contrastive estimation and flow contrastive estimation as methods to train EBMs.
2. One of the papers presented trains energy-based models with flow contrastive estimation, using a flow-based generator as the contrastive noise distribution; this sidesteps the intractable normalizing constant of the EBM.
3. Another paper argues that classifiers can be viewed as joint energy-based models over inputs and outputs, and should be treated as such. It introduces a method to train classifiers as EBMs using contrastive divergence (a minimal sketch of this energy view follows this list).
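As a rough illustration of that joint-energy view (not the paper's implementation), the sketch below reads a classifier's logits f(x) as energies: E(x, y) = -f(x)[y], and the energy of x alone comes from marginalizing over labels with log-sum-exp. The logit values are hypothetical.

```python
# A rough sketch (not the paper's implementation) of reading classifier logits
# as a joint energy-based model: E(x, y) = -f(x)[y], and the energy of x alone
# comes from marginalizing y with log-sum-exp. Logit values are hypothetical.
import numpy as np

def joint_energy(logits, y):
    # E(x, y) = -f(x)[y]: lower energy means higher joint density p(x, y)
    return -logits[y]

def marginal_energy(logits):
    # E(x) = -log sum_y exp(f(x)[y]), computed stably via the max trick
    m = logits.max()
    return -(m + np.log(np.exp(logits - m).sum()))

logits = np.array([2.0, -1.0, 0.5])   # hypothetical logits f(x) for one input
print(joint_energy(logits, y=0))      # -2.0
print(marginal_energy(logits))        # about -2.24
```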
- The document discusses linear regression models and methods for estimating coefficients, including ordinary least squares and regularization methods like ridge regression and lasso regression.
- It explains how lasso regression, unlike ordinary least squares and ridge regression, has the property of driving some of the coefficient estimates exactly to zero, allowing for variable selection.
- An example using crime rate data shows how lasso regression can select a more parsimonious model than other methods by setting some coefficients to zero; the code sketch after this list illustrates the same effect on synthetic data.
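A minimal sketch of that selection effect, using scikit-learn on synthetic data (the crime-rate data itself is not reproduced here): lasso sets most coefficients exactly to zero, while ridge only shrinks them.

```python
# Minimal sketch of lasso's selection effect on synthetic data: with only two
# truly active features, lasso sets most coefficients exactly to zero, while
# ridge merely shrinks all of them. (Stands in for the crime-rate example.)
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
beta = np.zeros(10)
beta[:2] = [3.0, -2.0]                          # features 0 and 1 are active
y = X @ beta + 0.1 * rng.normal(size=100)

print(Lasso(alpha=0.5).fit(X, y).coef_)         # most entries exactly 0.0
print(Ridge(alpha=0.5).fit(X, y).coef_)         # all entries small but nonzero
```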
Introduction of "the alternate features search" using RSatoshi Kato
Introduction of the alternate features search using R, as proposed in: S. Hara and T. Maehara, Finding Alternate Features in Lasso, arXiv:1611.05940, 2016.
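The sketch below is one illustrative reading of the alternate-features idea, not the authors' algorithm or their R code: swap a lasso-selected feature for a candidate, refit on the swapped support, and rank candidates by how little the error grows. All data and parameter choices are hypothetical.

```python
# Illustrative reading of the alternate-features idea (not the authors'
# algorithm or R code): swap one lasso-selected feature for a candidate,
# refit on the swapped support, and rank candidates by the resulting error.
# Data and parameters are hypothetical; feature 1 is built to mimic feature 0.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=200)     # feature 1 ~ feature 0
y = X[:, 0] - X[:, 2] + 0.1 * rng.normal(size=200)

support = list(np.flatnonzero(Lasso(alpha=0.1).fit(X, y).coef_))

def swap_error(j, k):
    # mean squared error after replacing selected feature j with candidate k
    cols = [k if c == j else c for c in support]
    model = LinearRegression().fit(X[:, cols], y)
    return np.mean((y - model.predict(X[:, cols])) ** 2)

for j in support:
    for k in sorted(set(range(X.shape[1])) - set(support)):
        print(f"replace feature {j} with {k}: MSE = {swap_error(j, k):.3f}")
```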
* Satoshi Hara and Kohei Hayashi. Making Tree Ensembles Interpretable: A Bayesian Model Selection Approach. AISTATS'18 (to appear).
arXiv version: https://arxiv.org/abs/1606.09066
* GitHub: https://github.com/sato9hara/defragTrees
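A hedged usage sketch for the repository above. The API names (DefragModel, parseSLtrees, fittype='FAB') follow one reading of the repo's README and should be verified against the current code; the data and parameter values are placeholders.

```python
# Hedged usage sketch for defragTrees. The API names (DefragModel,
# parseSLtrees, fittype='FAB') follow one reading of the repo's README and
# may differ in the current version; verify before use. Data are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from defragTrees import DefragModel

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] > 0).astype(float) + 0.1 * rng.normal(size=500)

forest = RandomForestRegressor(n_estimators=20).fit(X, y)
splitter = DefragModel.parseSLtrees(forest)      # extract split rules (assumed API)
mdl = DefragModel(modeltype='regression', restart=20)
mdl.fit(X, y, splitter, 10, fittype='FAB')       # Kmax=10; FAB = the Bayesian model selection
print(mdl)                                       # prints the learned rule list
```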
Explanation in Machine Learning and Its Reliability (Satoshi Hara)
This document summarizes a presentation on explanation in machine learning. It discusses two types of explanations: saliency maps and similar examples. Saliency maps highlight the regions of an input that most influenced a prediction, while similar examples retrieve instances from a database that resemble the input. The document notes that the reliability of explanations has become a key concern, since explanations may not be valid or could be used maliciously. It reviews research evaluating the faithfulness and plausibility of explanations, and describes tests such as parameter randomization for evaluating faithfulness. The talk concludes that generating fake explanations could allow unfair models to appear fair, a risk known as "fairwashing" that more research is needed to address.
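As a concrete version of the parameter-randomization test mentioned above, the sketch below compares gradient saliency maps before and after re-initializing a network's weights; a faithful saliency method should produce very different maps. The tiny untrained network and random input are placeholders for a real model and image.

```python
# Sketch of the parameter-randomization test: compare gradient saliency before
# and after re-initializing the weights. If the two maps stay highly similar,
# the saliency method is not faithful to the model. The tiny (untrained)
# network and random input are placeholders for a real model and image.
import torch

torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Linear(10, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
x = torch.randn(10)

def saliency(model, x):
    x = x.detach().clone().requires_grad_(True)
    model(x).sum().backward()
    return x.grad.abs()

s_before = saliency(model, x)
for p in model.parameters():            # randomize all parameters
    torch.nn.init.normal_(p)
s_after = saliency(model, x)

# A faithful method should give very different maps, i.e. correlation far from 1.
print(torch.corrcoef(torch.stack([s_before, s_after]))[0, 1])
```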
Convex Hull Approximation of Nearly Optimal Lasso Solutions (Satoshi Hara)
Satoshi Hara, Takanori Maehara. Convex Hull Approximation of Nearly Optimal Lasso Solutions. In Proceedings of 16th Pacific Rim International Conference on Artificial Intelligence, Part II, pages 350--363, 2019.
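To make "nearly optimal lasso solutions" concrete, the brute-force sketch below enumerates small supports and keeps those whose lasso objective is within a (1 + eps) factor of the full optimum. The paper approximates this solution set by a convex hull of extreme points; the enumeration here only illustrates that many distinct near-optimal solutions can coexist, and is not the authors' algorithm.

```python
# Brute-force illustration of epsilon-optimal lasso solutions (not the paper's
# convex hull algorithm): fit lasso on every 2-feature support and keep those
# within a (1 + eps) factor of the full optimum. The correlated pair built
# below creates several distinct near-optimal supports.
import numpy as np
from itertools import combinations
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
X[:, 3] = X[:, 0] + 0.05 * rng.normal(size=100)   # feature 3 ~ feature 0
y = X[:, 0] + X[:, 1] + 0.1 * rng.normal(size=100)
alpha, eps = 0.1, 0.05

def objective(cols):
    # sklearn's lasso objective: (1/2n)*||y - Xw||^2 + alpha*||w||_1
    m = Lasso(alpha=alpha).fit(X[:, cols], y)
    return 0.5 * np.mean((y - m.predict(X[:, cols])) ** 2) + alpha * np.abs(m.coef_).sum()

best = objective(list(range(6)))                  # optimum over all features
for cols in combinations(range(6), 2):
    f = objective(list(cols))
    if f <= (1 + eps) * best:
        print(cols, round(f, 4))                  # e.g. both (0, 1) and (1, 3)
```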
Theoretical Linear Convergence of Unfolded ISTA and its Practical Weights and Thresholds (Satoshi Hara)
[NeurIPS 2018 paper-reading meetup in Kyoto]
Theoretical Linear Convergence of Unfolded ISTA and its Practical Weights and Thresholds
https://papers.nips.cc/paper/8120-theoretical-linear-convergence-of-unfolded-ista-and-its-practical-weights-and-thresholds
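For background, the sketch below implements plain ISTA for the lasso objective 0.5*||Ax - b||^2 + lam*||x||_1. Unfolded (learned) ISTA keeps exactly this update structure but makes the matrices and soft-threshold levels trainable per layer, which is the setting whose linear convergence the paper analyzes. Problem sizes and constants here are arbitrary.

```python
# Plain ISTA for the lasso problem min_x 0.5*||Ax - b||^2 + lam*||x||_1.
# Unfolded (learned) ISTA keeps exactly this update but makes the matrices and
# soft-threshold levels trainable per layer; the paper analyzes when such
# unfolded networks converge linearly. Problem sizes here are arbitrary.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):              # one iteration = one "layer" when unfolded
        x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 100))
x_true = np.zeros(100)
x_true[[3, 40, 77]] = [1.0, -2.0, 1.5]
b = A @ x_true + 0.01 * rng.normal(size=50)
# should approximately recover the true support {3, 40, 77}
print(np.flatnonzero(np.abs(ista(A, b, lam=0.1)) > 1e-2))
```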