Simonyan, K., Vedaldi, A., and Zisserman, A. Christmann, A. and Steinwart, I. The result is a dict/JSON containing the influences calculated for all training data. Fortunately, influence functions give us an efficient approximation. Thus, we can see that different models learn more from different images. Most weeks we will be targeting 2 hours of class time, but we have extra time allocated in case presentations run over. Liu, D. C. and Nocedal, J. On the limited memory BFGS method for large scale optimization. # do something with influences/harmful/helpful. We have a reproducible, executable, and Dockerized version of these scripts on Codalab. (a) What is the effect of the training loss and $H_{\hat{\theta}}^{-1}$ terms in $\mathcal{I}_{\text{up,loss}}$? Theano Development Team. Which algorithmic choices matter at which batch sizes? This code replicates the experiments from the following paper: Pang Wei Koh and Percy Liang. Understanding Black-box Predictions via Influence Functions. International Conference on Machine Learning (ICML), 2017. Helpful images show which images out of the training dataset were the most helpful, whereas harmful images were the most harmful for the given prediction. One parameter sets the initial value of the Hessian during the s_test calculation. P. W. Koh*, K.-S. Ang*, H. Teo*, and P. Liang. On the accuracy of influence functions for measuring group effects, 2019. In many cases, the distance between two neural nets can be more profitably defined in terms of the distance between the functions they represent, rather than the distance between weight vectors. Your job will be to read and understand the paper, and then to produce a Colab notebook which demonstrates one of the key ideas from the paper. To scale up influence functions to modern machine learning settings, we must avoid repeating expensive per-sample calculations, which could potentially number in the tens of thousands. The values s_test and grad_z for each training image are computed on the fly. Some of the ideas were established decades ago (and perhaps forgotten by much of the community), and others are just beginning to be understood today. Why neural nets generalize despite their enormous capacity is intimately tied to the dynamics of training. The main choices are: In this paper, we use influence functions, a classic technique from robust statistics, to trace a model's prediction through the learning algorithm and back to its training data. More details can be found in the project handout. Frenay, B. and Verleysen, M. Classification in the presence of label noise: a survey. Systems often become easier to analyze in the limit. Thus, you can easily find mislabeled images in your dataset.
J. Cohen, S. Kaur, Y. Li, J. Z. Kolter, and A. Talwalkar. Gradient descent on neural networks typically occurs at the edge of stability. Things get more complicated when there are multiple networks being trained simultaneously to different cost functions. A related parameter controls how many recursions are used when approximating the influence.
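As a sketch of that recursive approximation (this is the stochastic estimation scheme the paper uses for $s_{\text{test}} = H_{\hat{\theta}}^{-1}\nabla_{\theta}L(z_{\text{test}},\hat{\theta})$; $z_{s_j}$ denotes a training point sampled at step $j$):

$$\tilde{H}_{0}^{-1}v = v, \qquad \tilde{H}_{j}^{-1}v = v + \left(I - \nabla_{\theta}^{2} L(z_{s_j}, \hat{\theta})\right)\tilde{H}_{j-1}^{-1}v,$$

so each step needs only one Hessian-vector product, and running more recursions makes $\tilde{H}_{j}^{-1}v$ a better estimate of $H_{\hat{\theta}}^{-1}v$ (in practice the problem is scaled so that this recursion converges).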
arXiv preprint arXiv:1703.04730 (2017). Krizhevsky, A., Sutskever, I., and Hinton, G. E., 2012. In Proceedings of the 34th International Conference on Machine Learning, Volume 70, pages 1885--1894. Influence functions can of course also be used for data other than images. The second mode calculates the grad_z values for all images first and saves them to disk. Idea: use influence functions to observe how individual training samples influence a given test sample. In this paper, we use influence functions, a classic technique from robust statistics, to trace a model's prediction through the learning algorithm and back to its training data, thereby identifying training points most responsible for a given prediction.
With the rapid adoption of machine learning systems in sensitive applications, there is an increasing need to make black-box models explainable. To scale up influence functions to modern machine learning settings, we develop a simple, efficient implementation that requires only oracle access to gradients and Hessian-vector products. Gradient descent on neural networks typically occurs at the edge of stability.
Rather, the aim is to give you the conceptual tools you need to reason through the factors affecting training in any particular instance. Li, B., Wang, Y., Singh, A., and Vorobeychik, Y.
I recommend changing the following parameters to your liking. Rethinking the Inception architecture for computer vision. To run the tests, additional requirements must be installed. You can either install this package directly through pip. Calculating the influence of the individual samples of your training dataset on your model can be done in one of two modes.
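As a minimal usage sketch of the workflow these README fragments describe: the package import name, the get_default_config helper, and the exact signature of calc_img_wise below are assumptions pieced together from the excerpts in this document, not a verified copy of the package's API.

import pytorch_influence_functions as ptif  # assumed import name

# model, trainloader, and testloader are ordinary PyTorch objects defined elsewhere
config = ptif.get_default_config()   # assumed helper returning the misc-parameters dict
config["recursion_depth"] = 5000     # more recursions -> better s_test approximation (assumed key)
config["gpu"] = 0

# calc_img_wise computes s_test and grad_z on the fly for each test image
influences, harmful, helpful = ptif.calc_img_wise(config, model, trainloader, testloader)

# do something with influences/harmful/helpful
print(helpful[:10])   # IDs of the most helpful training images
print(harmful[:10])   # IDs of the most harmful training images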
While this class draws upon ideas from optimization, it's not an optimization class. Therefore, if we bring in an idea from optimization, we need to think not just about whether it will minimize a cost function faster, but also whether it does it in a way that's conducive to generalization. Appendix: Understanding Black-box Predictions via Influence Functions, Pang Wei Koh and Percy Liang. Deriving the influence function I_up,params: for completeness, we provide a standard derivation of the influence function I_up,params in the context of loss minimization (M-estimation). The resulting vector is used to calculate the influence. Thomas, W. and Cook, R. D. Assessing influence on predictions from generalized linear models.
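A compressed version of that standard derivation, using the same notation as the formulas elsewhere in this document:

$$\hat{\theta}_{\epsilon,z} = \arg\min_{\theta \in \Theta} \frac{1}{n}\sum_{i=1}^{n} L(z_i, \theta) + \epsilon L(z, \theta).$$

The first-order optimality condition at $\hat{\theta}_{\epsilon,z}$ is

$$0 = \frac{1}{n}\sum_{i=1}^{n} \nabla_{\theta} L(z_i, \hat{\theta}_{\epsilon,z}) + \epsilon \nabla_{\theta} L(z, \hat{\theta}_{\epsilon,z}).$$

Taylor-expanding around $\hat{\theta}$ and writing $\Delta_{\epsilon} = \hat{\theta}_{\epsilon,z} - \hat{\theta}$,

$$0 \approx \left[\frac{1}{n}\sum_{i=1}^{n}\nabla_{\theta}L(z_i,\hat{\theta}) + \epsilon\nabla_{\theta}L(z,\hat{\theta})\right] + \left[\frac{1}{n}\sum_{i=1}^{n}\nabla^{2}_{\theta}L(z_i,\hat{\theta})\right]\Delta_{\epsilon}.$$

Since $\hat{\theta}$ minimizes the unperturbed objective, $\frac{1}{n}\sum_{i}\nabla_{\theta}L(z_i,\hat{\theta}) = 0$, leaving $0 \approx \epsilon\nabla_{\theta}L(z,\hat{\theta}) + H_{\hat{\theta}}\Delta_{\epsilon}$, hence

$$\Delta_{\epsilon} \approx -H_{\hat{\theta}}^{-1}\nabla_{\theta}L(z,\hat{\theta})\,\epsilon, \qquad \mathcal{I}_{\text{up,params}}(z) = \left.\frac{d\hat{\theta}_{\epsilon,z}}{d\epsilon}\right|_{\epsilon=0} = -H_{\hat{\theta}}^{-1}\nabla_{\theta}L(z,\hat{\theta}).$$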
Understanding Black-box Predictions via Influence Functions Datta, A., Sen, S., and Zick, Y. Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems. compress your dataset slightly to the most influential images important for For these Loss , . Then, it'll calculate all s_test values and save those to disk. ordered by helpfulness. Approach Consider a prediction problem from some input space X (e.g., images) to an output space Y(e.g., labels). Lage, E. Chen, J. Validations 4. Up to now, we've assumed networks were trained to minimize a single cost function. Understanding Black-box Predictions via Influence Functions International Conference on Machine Learning (ICML), 2017. We would like to show you a description here but the site won't allow us.
Ribeiro, M. T., Singh, S., and Guestrin, C. "Why should I trust you?" Thus, in the calc_img_wise mode, we throw away all grad_z values after they are used. Jianxin Ma, Peng Cui, Kun Kuang, Xin Wang, and Wenwu Zhu. Implicit Regularization and Bayesian Inference [Slides]. This class is about developing the conceptual tools to understand what happens when a neural net trains.
Understanding black-box predictions via influence functions. International Conference on Machine Learning (ICML), 2017. The first mode is called calc_img_wise, during which the two values s_test and grad_z are computed on the fly for each image. TL;DR: the recommended way is using calc_img_wise unless you have a very large amount of memory to spare. Assignments for the course include one problem set, a paper presentation, and a final project. The dict structure looks similar to this: Harmful is a list of numbers, which are the IDs of the most harmful training data samples.
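A sketch of what that dict might look like, using only the keys named in these fragments (influence, harmful, helpful); the exact key names and nesting in the package's JSON output may differ, and the values below are made up:

influences = {
    "0": {  # one entry per test image, keyed by its index in the test set
        "influence": [0.021, -0.584, 0.003],  # influence of each training sample on this test image
        "harmful": [432, 77, 1298],           # training-sample IDs, most harmful first
        "helpful": [5, 9001, 12],             # training-sample IDs, most helpful first
    },
}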
Self-tuning networks: Bilevel optimization of hyperparameters using structured best-response functions. Optimizing neural networks with Kronecker-factored approximate curvature.
Background information: this was an ICML 2017 best paper, from Stanford (Pang Wei Koh and Percy Liang). Negative momentum for improved game dynamics. S. McCandlish, J. Kaplan, D. Amodei, and the OpenAI Dota Team. The idea is to compute the parameter change if $z$ were upweighted by some small $\epsilon$, giving us new parameters $\hat{\theta}_{\epsilon, z} \stackrel{\text{def}}{=} \arg\min_{\theta \in \Theta} \frac{1}{n} \sum_{i=1}^{n} L(z_i, \theta) + \epsilon L(z, \theta)$. The datasets for the experiments can also be found at the Codalab link. A sign-up sheet will be distributed via email. If multiple test images are used, the harmfulness is ordered by average harmfulness to the test samples. Goodman, B. and Flaxman, S. European Union regulations on algorithmic decision-making and a "right to explanation". The security of latent Dirichlet allocation. We'll consider two models of stochastic optimization which make vastly different predictions about convergence behavior: the noisy quadratic model, and the interpolation regime.
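Removing a training point entirely corresponds to upweighting it by $\epsilon = -1/n$, so a first-order estimate of the leave-one-out parameter change follows directly from this definition:

$$\hat{\theta}_{-z} - \hat{\theta} \;\approx\; -\frac{1}{n}\,\mathcal{I}_{\text{up,params}}(z) \;=\; \frac{1}{n} H_{\hat{\theta}}^{-1}\nabla_{\theta}L(z,\hat{\theta}),$$

where $\mathcal{I}_{\text{up,params}}(z) = -H_{\hat{\theta}}^{-1}\nabla_{\theta}L(z,\hat{\theta})$ is the influence on the parameters derived below.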
When the loss is non-convex, the method forms a convex quadratic approximation of the loss around the obtained parameters. Model selection in kernel based regression using the influence function.
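One common remedy, used in the paper's treatment of non-convexity, is to add a damping term $\lambda$ to the Hessian before inverting, which makes the quadratic approximation positive definite ($\lambda$ is a user-chosen constant):

$$s_{\text{test}} \approx \left(H_{\hat{\theta}} + \lambda I\right)^{-1} \nabla_{\theta} L(z_{\text{test}}, \hat{\theta}), \qquad \lambda > 0.$$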
Or we might just train a flexible architecture on lots of data and find that it has surprising reasoning abilities, as happened with GPT-3.
Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. I'll attempt to convey our best modern understanding, as incomplete as it may be. Data-trained predictive models see widespread use, but for the most part they are used as black boxes which output a prediction or score. In Proceedings of the International Conference on Machine Learning (ICML). Koh, P. and Liang, P., 2017. Hessian-vector products are used to avoid forming and inverting the Hessian explicitly.
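A minimal sketch of such a Hessian-vector product in PyTorch via double backpropagation; model, criterion, and data below are placeholders, and the point is that the Hessian is never materialized:

import torch

def hvp(loss, params, vec):
    # First backward pass: gradient of the loss, keeping the graph for a second pass
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    # Second backward pass: d(g^T v)/d params = H v
    grad_vec_prod = torch.dot(flat_grad, vec)
    hvp_parts = torch.autograd.grad(grad_vec_prod, params)
    return torch.cat([h.reshape(-1) for h in hvp_parts])

# usage sketch (shapes are assumptions):
# params = [p for p in model.parameters() if p.requires_grad]
# loss = criterion(model(x), y)
# v = torch.randn(sum(p.numel() for p in params))
# Hv = hvp(loss, params, v)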
That can increase prediction accuracy. Ben-David, S., Blitzer, J., Crammer, K., Kulesza, A., Pereira, F., and Vaughan, J. W. A theory of learning from different domains. In Proceedings of the International Conference on Machine Learning (ICML). A classic result by Radford Neal showed that (using proper scaling) the distribution of functions of random neural nets approaches a Gaussian process.
Reconciling modern machine-learning practice and the classical bias-variance tradeoff. Mei, S. and Zhu, X. Haoping Xu, Zhihuan Yu, and Jingcheng Niu. Gradient-based Hyperparameter Optimization through Reversible Learning.
On Second-Order Group Influence Functions for Black-Box Predictions. For details and examples, look here. Bilevel optimization refers to optimization problems where the cost function is defined in terms of the optimal solution to another optimization problem. On linear models and convolutional neural networks, we demonstrate that influence functions are useful for multiple purposes: understanding model behavior, debugging models, detecting dataset errors, and even creating visually-indistinguishable training-set attacks. Requirements, Installation, Usage, Background and Documentation, config, Misc parameters. Riemannian metrics for neural networks I: Feed-forward networks. It is faster to keep these values in RAM than to calculate them on the fly. How can we explain the predictions of a black-box model? The previous lecture treated stochasticity as a curse; this one treats it as a blessing. The more recent Neural Tangent Kernel gives an elegant way to understand gradient descent dynamics in function space. (a) train loss, Hessian, train_loss + Hessian. The deep bootstrap framework: Good online learners are good offline generalizers. Hopefully this understanding will let us improve the algorithms. Krizhevsky, A., Sutskever, I., and Hinton, G. E. Imagenet classification with deep convolutional neural networks. NIPS, pp. 1097--1105. Huang, L., Joseph, A. D., Nelson, B., Rubinstein, B. I., and Tygar, J. Adversarial machine learning. Stochastic gradient descent as approximate Bayesian inference. Biggio, B., Nelson, B., and Laskov, P. Poisoning attacks against support vector machines. Three mechanisms of weight decay regularization. Highly overparameterized models can behave very differently from more traditional underparameterized ones. On the importance of initialization and momentum in deep learning. This can speed up the calculation significantly, as no duplicate calculations take place. Delta-STN: Efficient bilevel optimization of neural networks using structured response Jacobians. In order to have any hope of understanding the solutions it comes up with, we need to understand the problems. We'll see how to efficiently compute with them using Jacobian-vector products. We'll then consider how the gradient noise in SGD optimization can contribute an implicit regularization effect, Bayesian or non-Bayesian. If you have questions, please contact Pang Wei Koh (
[email protected]).
LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. How can we explain the predictions of a black-box model? Deep learning via hessian-free optimization. Time permitting, we'll also consider the limit of infinite depth. A. S. Benjamin, D. Rolnick, and K. P. Kording. It is individual work. Measuring and regularizing networks in function space. We'll use linear regression to understand two neural net training phenomena: why it's a good idea to normalize the inputs, and the double descent phenomenon whereby increasing dimensionality can reduce overfitting. Disentangled graph convolutional networks. On the importance of initialization and momentum in deep learning. A mathematical theory of semantic development in deep neural networks. Interpreting black box predictions using Fisher kernels. The numbers above the images show the actual influence value which was calculated. We'll mostly focus on minimax optimization, or zero-sum games. In many cases, they have far more than enough parameters to memorize the data, so why do they generalize well? A spherical analysis of Adam with batch normalization. We have 3 hours scheduled for lecture and/or tutorial. Theano: A Python framework for fast computation of mathematical expressions. Understanding Black-box Predictions via Influence Functions, by Pang Wei Koh and Percy Liang; presented by Theo, Aditya, and Patrick. 1. Influence functions: definitions and theory. 2. Efficiently calculating influence functions. 3. Validations. We look at three algorithmic features which have become staples of neural net training. Strack, B., DeShazo, J. P., Gennings, C., Olmo, J. L., Ventura, S., Cios, K. J., and Clore, J. N. Impact of HbA1c measurement on hospital readmission rates: analysis of 70,000 clinical database patient records.
Liu, D. C. and Nocedal, J. Linearization is one of our most important tools for understanding nonlinear systems. Chatterjee, S. and Hadi, A. S. Influential observations, high leverage points, and outliers in linear regression.
ICML 2017 Best Paper. Upweighting a training point $z$ by a small $\epsilon$ gives the perturbed parameters

$$\hat{\theta}_{\epsilon, z} \stackrel{\text{def}}{=} \arg\min_{\theta \in \Theta} \frac{1}{n} \sum_{i=1}^{n} L(z_i, \theta) + \epsilon L(z, \theta),$$

and the influence of $z$ on the parameters is

$$\mathcal{I}_{\text{up,params}}(z) \stackrel{\text{def}}{=} \left.\frac{d \hat{\theta}_{\epsilon, z}}{d \epsilon}\right|_{\epsilon=0} = -H_{\hat{\theta}}^{-1} \nabla_{\theta} L(z, \hat{\theta}).$$

Applying the chain rule gives the influence on the loss at a test point:

$$\begin{aligned} \mathcal{I}_{\text{up,loss}}(z, z_{\text{test}}) &\stackrel{\text{def}}{=} \left.\frac{d L(z_{\text{test}}, \hat{\theta}_{\epsilon, z})}{d \epsilon}\right|_{\epsilon=0} \\ &= \left.\nabla_{\theta} L(z_{\text{test}}, \hat{\theta})^{\top} \frac{d \hat{\theta}_{\epsilon, z}}{d \epsilon}\right|_{\epsilon=0} \\ &= -\nabla_{\theta} L(z_{\text{test}}, \hat{\theta})^{\top} H_{\hat{\theta}}^{-1} \nabla_{\theta} L(z, \hat{\theta}). \end{aligned}$$

Setting $\varepsilon = -1/n$ corresponds to removing $z$ from the training set. For an input perturbation, write $z = (x, y)$ and $z_{\delta} \stackrel{\text{def}}{=} (x+\delta, y)$, and define

$$\hat{\theta}_{\epsilon, z_{\delta}, -z} \stackrel{\text{def}}{=} \arg\min_{\theta \in \Theta} \frac{1}{n} \sum_{i=1}^{n} L(z_i, \theta) + \epsilon L(z_{\delta}, \theta) - \epsilon L(z, \theta).$$

Then

$$\begin{aligned} \left.\frac{d \hat{\theta}_{\epsilon, z_{\delta}, -z}}{d \epsilon}\right|_{\epsilon=0} &= \mathcal{I}_{\text{up,params}}(z_{\delta}) - \mathcal{I}_{\text{up,params}}(z) \\ &= -H_{\hat{\theta}}^{-1}\left(\nabla_{\theta} L(z_{\delta}, \hat{\theta}) - \nabla_{\theta} L(z, \hat{\theta})\right), \end{aligned}$$

and for small $\delta$ (Taylor-expanding the gradient difference in $\delta$),

$$\left.\frac{d \hat{\theta}_{\epsilon, z_{\delta}, -z}}{d \epsilon}\right|_{\epsilon=0} \approx -H_{\hat{\theta}}^{-1}\left[\nabla_{x} \nabla_{\theta} L(z, \hat{\theta})\right] \delta, \qquad \hat{\theta}_{z_{\delta}, -z} - \hat{\theta} \approx -\frac{1}{n} H_{\hat{\theta}}^{-1}\left[\nabla_{x} \nabla_{\theta} L(z, \hat{\theta})\right] \delta.$$

The influence of perturbing $z$ on the test loss is

$$\begin{aligned} \mathcal{I}_{\text{pert,loss}}(z, z_{\text{test}})^{\top} &\stackrel{\text{def}}{=} \left.\nabla_{\delta} L(z_{\text{test}}, \hat{\theta}_{z_{\delta}, -z})^{\top}\right|_{\delta=0} \\ &= -\nabla_{\theta} L(z_{\text{test}}, \hat{\theta})^{\top} H_{\hat{\theta}}^{-1} \nabla_{x} \nabla_{\theta} L(z, \hat{\theta}). \end{aligned}$$

Note the two ingredients of $\mathcal{I}_{\text{up,loss}}(z, z_{\text{test}})$: the training-point gradient (the train loss term) and the inverse Hessian $H_{\hat{\theta}}^{-1}$. For logistic regression it takes the form

$$-y_{\text{test}}\, y \cdot \sigma(-y_{\text{test}}\, \theta^{\top} x_{\text{test}}) \cdot \sigma(-y\, \theta^{\top} x) \cdot x_{\text{test}}^{\top} H_{\hat{\theta}}^{-1} x.$$

Influence functions can be used to debug training data: $\mathcal{I}_{\text{up,loss}}(z, z_{\text{test}})$ measures how much each training point affects the loss at a test point. Stochastic estimation avoids explicitly forming and inverting the Hessian $H$, reducing the cost to $O(np)$ in the number of samples $n$ and parameters $p$. The experiments include an ImageNet dog-vs-fish task with 900 images, using Inception v3 features and an SVM with an RBF kernel for comparison; a data-poisoning attack constructed with influence functions (the quoted figures include 57%, 77%, and 590/591); related work on training-set attacks and adversarial examples; and using influence functions to debug bad cases and to find label errors. Ranking training points by the self-influence $\mathcal{I}_{\text{up,loss}}(z_i, z_i)$ surfaces mislabeled examples: with 10% of labels flipped, checking points in order of self-influence finds label errors faster than checking by train loss or at random. The key quantities are $\mathcal{I}_{\text{up,loss}}(z, z_{\text{test}})$, $\mathcal{I}_{\text{up,loss}}(z_i, z_i)$, $\mathcal{I}_{\text{pert,loss}}(z, z_{\text{test}})^{\top}$, and $H_{\hat{\theta}}^{-1}\nabla_{x}\nabla_{\theta}L(z, \hat{\theta})$. Less Is Better: Unweighted Data Subsampling via Influence Function.
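A toy end-to-end sketch of these formulas for a model small enough to form the Hessian explicitly (logistic regression on synthetic data; the data, dimensions, and damping value are made up for illustration, and for large models the explicit solve would be replaced by the stochastic s_test estimate above):

import torch
from torch.autograd.functional import hessian

torch.manual_seed(0)
n, d = 200, 5
X = torch.randn(n, d)
y = (X @ torch.randn(d) > 0).float()

def loss_fn(w, Xb, yb):
    return torch.nn.functional.binary_cross_entropy_with_logits(Xb @ w, yb, reduction="mean")

# fit the model (stand-in for the empirical risk minimizer theta-hat)
w = torch.zeros(d, requires_grad=True)
opt = torch.optim.LBFGS([w], max_iter=200)
def closure():
    opt.zero_grad()
    l = loss_fn(w, X, y)
    l.backward()
    return l
opt.step(closure)
w_hat = w.detach().clone().requires_grad_(True)

# Hessian of the training loss at w_hat, with a small damping term for stability
H = hessian(lambda v: loss_fn(v, X, y), w_hat.detach()) + 0.01 * torch.eye(d)

# s_test = H^{-1} grad L(z_test, theta-hat) for one test point (here: the first point)
g_test = torch.autograd.grad(loss_fn(w_hat, X[:1], y[:1]), w_hat)[0]
s_test = torch.linalg.solve(H, g_test)

# I_up,loss(z_i, z_test) = -grad L(z_i)^T H^{-1} grad L(z_test) for every training point
influences = []
for i in range(n):
    g_i = torch.autograd.grad(loss_fn(w_hat, X[i:i + 1], y[i:i + 1]), w_hat)[0]
    influences.append(-torch.dot(g_i, s_test).item())

# positive values: upweighting the point increases the test loss (harmful);
# negative values: upweighting the point decreases the test loss (helpful)
harmful = sorted(range(n), key=lambda i: -influences[i])
helpful = sorted(range(n), key=lambda i: influences[i])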
Validation: influence-function estimates closely track leave-one-out retraining (correlations of roughly 0.86, and 0.95 for an SVM with hinge loss, are quoted); the approach is straightforward, and the paper won a best paper award. Stochastic Optimization and Scaling [Slides]. If there are n samples, removing one sample can be interpreted as upweighting it by ε = -1/n. Influence functions efficiently estimate the effect of removing a single training data point on a model's learned parameters. Dependencies: Numpy/Scipy/Scikit-learn/Pandas. s_test is dependent on the test sample(s). I. Sutskever, J. Martens, G. Dahl, and G. Hinton. For more details please see Differentiable Games (Lecture by Guodong Zhang) [Slides].
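Continuing the toy sketch above (and assuming its names w_hat, H, X, y, loss_fn, and n are still in scope), the same quantities give the self-influence ranking used for finding label errors:

import torch

# magnitude of I_up,loss(z_i, z_i) = g_i^T H^{-1} g_i for every training point
self_influence = []
for i in range(n):
    g_i = torch.autograd.grad(loss_fn(w_hat, X[i:i + 1], y[i:i + 1]), w_hat)[0]
    self_influence.append(torch.dot(g_i, torch.linalg.solve(H, g_i)).item())

# points the model "memorizes" hardest come first; inspect their labels manually
suspects = sorted(range(n), key=lambda i: -self_influence[i])[:20]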
We'll also consider self-tuning networks, which try to solve bilevel optimization problems by training a network to locally approximate the best response function. Visualised, the output can look like this: the test image on the top left is the test image for which the influences were calculated.
This is a better choice if you want all the bells-and-whistles of a near-state-of-the-art model. Imagenet classification with deep convolutional neural networks. grad_z, on the other hand, is only dependent on the training sample.
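A minimal sketch of that precompute-and-reuse pattern; the directory layout and helper name below are assumptions for illustration, not the package's actual API:

import torch
from pathlib import Path

def save_all_grad_z(model, criterion, trainloader, out_dir="grad_z_cache"):
    # grad_z depends only on the training sample, so compute it once per sample
    # and reuse the saved files for every test image later
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    params = [p for p in model.parameters() if p.requires_grad]
    for idx, (x, y) in enumerate(trainloader):  # assumes batch_size=1
        loss = criterion(model(x), y)
        grads = torch.autograd.grad(loss, params)
        torch.save([g.detach().cpu() for g in grads], out / f"grad_z_{idx:06d}.pt")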
RelEx: A Model-Agnostic Relational Model Explainer. Components of influence. Martens, J. We'll also discuss where the theory breaks down. The degree of influence of a single training sample $z$ on all model parameters is calculated as $\mathcal{I}_{\text{up,params}}(z) = \left.\frac{d\hat{\theta}_{\epsilon,z}}{d\epsilon}\right|_{\epsilon=0} = -H_{\hat{\theta}}^{-1}\nabla_{\theta}L(z,\hat{\theta})$, where $\epsilon$ is the weight of sample $z$ relative to the other training samples. Understanding short-horizon bias in stochastic meta-optimization. Using machine teaching to identify optimal training-set attacks on machine learners. For a point $z$ and parameters $\theta \in \Theta$, let $L(z; \theta)$ be the loss, and let $\frac{1}{n}\sum_{i=1}^{n} L(z_i; \theta)$ be the empirical risk.
This is "Understanding Black-box Predictions via Influence Functions --- Pang Wei Koh, Percy Liang" by TechTalksTV on Vimeo, the home for high quality reading both values from disk and calculating the influence base on them. Understanding Black-box Predictions via Inuence Functions Figure 1.