If you have ever worked with neural networks, you are probably familiar with the black box problem [1, 2]. Compared to many algorithms from the glass box category, neural networks are inherently difficult to dissect. This should come as no surprise: we use neural networks to solve problems that are difficult for humans to put into the language of algorithms. Whenever it is hard for an expert to handcraft features that would help any other machine learning algorithm, that is typically where neural networks come into the picture and blow the competition out of the water.
So what can be done? As it turns out, the inherent difficulty of explaining how neural networks work does not deter everyone; there are people who, I would say, are even drawn to it. I still think the proposed methods are far from explaining how a neural network behaves across the entire dataset or all classes at once. Right now, in my opinion, explanation methods are most powerful in a sample-by-sample examination: they can, for example, tell you which parts of an image played the most important role in a classification decision. Even so, I would argue that there are important conclusions to be drawn about the task as a whole.
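To make the sample-by-sample idea concrete, here is a minimal sketch of one of the simplest attribution techniques, occlusion sensitivity: slide a small patch over the input, zero it out, and record how much the model's score drops at each position. The "classifier" below is a toy fixed linear scorer standing in for a real network, and all names (`score`, `occlusion_map`, the 8x8 image size) are illustrative choices, not anything from a particular library or from Rossum's models.

```python
import numpy as np

# Toy "classifier": a fixed linear scorer over an 8x8 grayscale image.
# Its weights are concentrated in the top-left quadrant, so a correct
# saliency map should highlight that region.
weights = np.zeros((8, 8))
weights[:4, :4] = 1.0

def score(img):
    """Scalar class score for one image (stand-in for a network output)."""
    return float((img * weights).sum())

def occlusion_map(img, patch=2):
    """Occlusion sensitivity: zero out a patch at every position and
    record how much the score drops. Large drops mark regions the
    model relies on for this particular sample."""
    base = score(img)
    h, w = img.shape
    heat = np.zeros((h - patch + 1, w - patch + 1))
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            occluded = img.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i, j] = base - score(occluded)
    return heat

img = np.ones((8, 8))
heat = occlusion_map(img)
# The highest-impact patch lies in the top-left quadrant, as expected.
top = np.unravel_index(heat.argmax(), heat.shape)
print(top)  # → (0, 0)
```

The same loop works with any model that exposes a scalar score; for real networks one would typically occlude with a mean or blurred patch rather than zeros, since an all-zero patch can itself be out of distribution.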
Even though the methods are very general, I will, in the end, focus on the models we use at Rossum for invoices. If you stay until the end of this blog post, I promise you will see some nice results on invoices, along with tips on how to apply these methods in your own projects.