Categories: Meetings

Interpretability of deep neural networks for computer vision

Francesco Di Ciccio will discuss the importance of making models interpretable, presenting a first experiment with the Layer-wise Relevance Propagation (LRP) technique.

Title: Interpretability of deep neural networks for computer vision

Machine learning methods are widely used in both commercial applications and academia to draw inferences in a broad range of areas. Extracting information from data in order to make accurate predictions is the predominant goal in many such applications, although it can come at the cost of the explainability of the applied model. The design of a model should therefore include tools that help us understand the output results with respect to the model parameters and their consistency with domain knowledge, as well as the choice of hyperparameters and their interpretation within the application domain. In some domains, such as medicine or economics, choosing the correct features is especially important, since decisions are made on the assumption that the predicted results come from a proper representation of the problem at hand. In such fields there is a strong reliance on the interpretability of the model, given the interest in knowing 'Why?' and in identifying the parameters that led the model to its predictions. The interest in interpretable models is shared by other fields, such as computer vision, which for simplicity will be used here to investigate how the Layer-wise Relevance Propagation (LRP) technique works. In this application, the focus is on a simple classification task with the MNIST dataset.
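To give a concrete feel for what LRP computes, below is a minimal sketch of the LRP-epsilon rule for a small fully-connected ReLU network, written in plain NumPy. The layer sizes, random placeholder weights, and function names are illustrative assumptions, not the experiment that will be presented; in the talk the technique is applied to a trained MNIST classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder network 784 -> 300 -> 100 -> 10, standing in for a trained
# MNIST classifier (the weights here are random and purely illustrative).
layer_sizes = [784, 300, 100, 10]
weights = [rng.standard_normal((m, n)) * np.sqrt(2.0 / m)
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Forward pass that keeps every layer's activations for the LRP pass."""
    activations = [x]
    for i, W in enumerate(weights):
        z = activations[-1] @ W
        # ReLU on hidden layers, raw scores (logits) on the output layer
        activations.append(np.maximum(z, 0.0) if i < len(weights) - 1 else z)
    return activations

def lrp_epsilon(activations, target_class, eps=1e-6):
    """Redistribute the target logit back to the input pixels (LRP-epsilon)."""
    relevance = np.zeros_like(activations[-1])
    relevance[target_class] = activations[-1][target_class]
    for W, a in zip(reversed(weights), reversed(activations[:-1])):
        z = a @ W                                   # pre-activations
        z = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilised denominator
        s = relevance / z                           # relevance per unit of z
        relevance = a * (W @ s)                     # share among the inputs
    return relevance                                # one relevance value per input

# Toy usage: a random vector in place of a flattened 28x28 MNIST digit.
x = rng.random(784)
acts = forward(x)
heatmap = lrp_epsilon(acts, target_class=int(np.argmax(acts[-1])))
print(heatmap.shape, heatmap.sum(), acts[-1].max())  # relevance sums ~ logit
```

The key idea is that the score of the predicted class is redistributed backwards, layer by layer, in proportion to each neuron's contribution to the next layer's pre-activations, so the relevance arriving at the input forms a heatmap of which pixels supported the decision.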

When: December 03 at 11.30

Where: in person (room 032_A_P03_3140) or online
