TensorFlow offers many useful features for tasks beyond learning itself. One worth mentioning is the built-in function for rendering framed rectangles for given bounding boxes, which is handy for quickly visualizing predictions on an image. But what if we want to get fancy? We recently faced a task: crop given areas from an image, process them into features, and render them back onto the image. Doing this in Keras or TensorFlow keeps it fast and lets it be embedded directly in a model. We might even want to overlay an image not with framed bounding boxes, but with alpha-blended filled rectangles. To be clear, we want feature-vector rendering, not a full graphics renderer in TensorFlow – that already exists (at least in the form of this example).
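As a minimal sketch of the idea (not Rossum's actual code): the framed rectangles mentioned above correspond to TensorFlow's built-in `tf.image.draw_bounding_boxes`, and cropping given areas maps to `tf.image.crop_and_resize`. The filled, alpha-blended variant is not built in, but it can be assembled from ordinary tensor ops, so it stays fast and embeddable in a model. The function names `crop_regions` and `blend_filled_rects` below are our own illustrative choices.

```python
import tensorflow as tf

def crop_regions(images, boxes, box_indices, crop_size=(32, 32)):
    """Cut fixed-size crops out of an image batch.

    boxes are normalized [y1, x1, y2, x2] coordinates, one row per crop;
    box_indices says which image in the batch each box belongs to.
    """
    return tf.image.crop_and_resize(images, boxes, box_indices, crop_size)

def blend_filled_rects(images, boxes, color, alpha=0.5):
    """Alpha-blend filled rectangles onto an image batch.

    images: [batch, H, W, C] floats in [0, 1]
    boxes:  [batch, num_boxes, 4] normalized [y1, x1, y2, x2]
    """
    batch, h, w, c = images.shape
    ys = tf.linspace(0.0, 1.0, h)                 # per-row y coordinate, [H]
    xs = tf.linspace(0.0, 1.0, w)                 # per-column x coordinate, [W]
    y1, x1, y2, x2 = tf.unstack(boxes, axis=-1)   # each [batch, num_boxes]
    # Inside-tests per axis: [batch, num_boxes, H] and [batch, num_boxes, W].
    in_y = (ys[None, None, :] >= y1[..., None]) & (ys[None, None, :] <= y2[..., None])
    in_x = (xs[None, None, :] >= x1[..., None]) & (xs[None, None, :] <= x2[..., None])
    # A pixel is covered if it lies inside any box: [batch, H, W].
    mask = tf.reduce_any(in_y[..., :, None] & in_x[..., None, :], axis=1)
    mask = tf.cast(mask, images.dtype)[..., None]  # [batch, H, W, 1]
    fill = tf.reshape(tf.constant(color, images.dtype), [1, 1, 1, c])
    # Standard alpha blending, entirely in tensor ops.
    return images * (1.0 - alpha * mask) + fill * (alpha * mask)

# Usage: one red half-transparent rectangle over a black 8x8 image.
imgs = tf.zeros([1, 8, 8, 3])
boxes = tf.constant([[[0.25, 0.25, 0.75, 0.75]]])
out = blend_filled_rects(imgs, boxes, color=[1.0, 0.0, 0.0], alpha=0.5)
crops = crop_regions(imgs, tf.constant([[0.25, 0.25, 0.75, 0.75]]),
                     tf.constant([0], dtype=tf.int32), crop_size=(4, 4))
```

Because everything is expressed as tensor operations, the whole pipeline can sit inside a `tf.function` or a Keras layer and run on GPU alongside the rest of the model.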
Our researchers have been hard at work on line item extraction from invoices over the past few months. We have made several leaps forward, which we will talk about in this post, and you can see how it works in our live demo right now! The rest of the post is aimed at our fellow data science geeks and covers our performance indicators for line items.
Quick update about line items in Elis
We have a lot of results we are eager to share, as well as the story of our journey towards breathtaking table extraction capabilities. It has involved intense hackathons, exciting competition among multiple teams, and endless thorny issues around annotating something as complex and varied as invoice table data – but that's for another blog post.
The gist of the story, short & sweet, is that users of the Elis verification interface can now enjoy a dramatic quality improvement in the semi-automatic line item capture we offer, and fully automatic table extraction is now available in our Data Extraction API as an experimental feature.
At Rossum, we have been hard at work researching line item extraction from invoices. It is a daunting task, but we are not afraid. We know you have been waiting patiently to hear from us, so we have put together a brief update on what has been going on in research, along with some conclusions we have drawn from the results so far. There is still more to learn – but we now know we are on the right path!