Automating data extraction from invoices using Rossum API and UiPath

In this tutorial, you will learn how to fully automate the process of data extraction from invoices using UiPath Studio and Rossum Elis Data Extraction API. Rossum’s cognitive data capture technology is unique because it does not require an explicit setup for every invoice template, but is able to find information out of the box. In this example, we’ll start with a directory with PDF files and/or images, and get a CSV spreadsheet containing the values of automatically extracted fields for each invoice.

No programming skills are needed, but some familiarity with UiPath Studio is assumed to comfortably follow the tutorial. If you’re new to UiPath, check out the UiPath Studio Guide. Even if you do not plan to use UiPath, this tutorial may give you a blueprint to follow – also check out our Technical Integration Guide for a more bird’s-eye view of the background.

This solution was created in cooperation with the EY Czech Republic RPA team led by Antonín Raizl, EY CZ & SK RPA Hub Leader. EY is one of Rossum’s main RPA integration partners – read our UiPath case study to learn more about how EY used an end-to-end solution based on Rossum Elis and UiPath to solve an urgent issue for one of our clients.

A ready-to-use .xaml file is available for download. It can be included as a part of your own workflow, or used as a standalone component – you only need to customize the following variables in the Initialize sequence to make it work for you: pathToFiles, authKey and fieldsToExtract.

Before we start, you will need UiPath Studio installed, an Elis Data Extraction API key, and a directory with the invoices (PDF, PNG or JPG files) you want to process.

Our invoice data extraction workflow will consist of a flowchart containing four main activities: Initialize, Upload, Get extracted data and Save to CSV.

The first sequence will initialize all the variables needed to use the Elis Extraction API, as well as the field types we are interested in. In the Upload part, files will be uploaded to the API for asynchronous processing and the intermediate results (Elis document IDs) stored in a DataTable. Get extracted data will iterate through all IDs returned by the API, collect the results when they are ready and store them in another DataTable. In the last sequence, we just save the extracted field values as a CSV file. Let’s go through each sequence in detail.


Initialize

The purpose of the Initialize sequence is to set appropriate values for the variables that will be used later, such as the field types we are interested in, the API endpoint, etc.

  1. Create a Flowchart called Extract data and add a Sequence activity called Initialize, connected to the Start node.
  2. Create the following variables (e.g. using the Variables panel). Their scope should be set to the Extract data flowchart, as we want them to be accessible across different sequences, except tableHeader, which can have a minimal scope.
    • pathToFiles (String): path to a directory containing the invoices to process
    • filesArray (String[]): paths to all files in the specified directory
    • uploadEndpoint (String): API endpoint for document uploading (for more details, see the Elis Data Extraction API documentation)
    • authKey (String): your API authorization key
    • fieldsToExtract (String[]): here you can define the field types you want to store (see the API documentation for a list of the extracted field types)
    • tableHeader (String): the CSV table header, generated from the chosen field types
  3. Using the Assign activity, assign these variables appropriate values inside the sequence (note that string values must be enclosed in quotes). In this tutorial, we’ll use the following values:
    • pathToFiles = "C:\Users\Elis\invoices\"
    • filesArray = Directory.GetFiles(pathToFiles)
    • uploadEndpoint = “”
    • authKey = "secret_key YOUR_API_KEY"
    • fieldsToExtract = {"invoice_id", "date_issue", "date_due", "amount_due"}
    • tableHeader = "filename," + String.Join(",", fieldsToExtract)

To create the table header, we join the values of the fieldsToExtract array with commas. The string "filename," is concatenated with the result of applying the String.Join function to the array; the first argument of String.Join specifies the separator and the second is the array containing the elements to join.
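For reference, the header construction can be sketched in Python (the workflow itself uses the equivalent VB.NET expression inside the Assign activity):

```python
# Python sketch of the tableHeader expression used in the Assign activity.
fields_to_extract = ["invoice_id", "date_issue", "date_due", "amount_due"]

# "filename," is prepended so the first CSV column holds the file name.
table_header = "filename," + ",".join(fields_to_extract)

print(table_header)  # filename,invoice_id,date_issue,date_due,amount_due
```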

Finally, we will need two DataTables – one to store invoice names and paths mapped to Elis invoice IDs returned from the API, and the other to store the final results:

  1. Create two DataTable variables: DT_Invoices and DT_Extracted accessible in the Extract data scope.
  2. Using Build Data Table activity, build a DataTable with three String columns: Invoice Name, Invoice Path, Elis Invoice ID. Set DT_Invoices as an output.

  3. Add a Generate Data Table activity with tableHeader as the input, DT_Extracted as the output DataTable and "," as the column separator.


Upload

This sequence will handle the uploading of your documents to the Elis Data Extraction API. After a file is uploaded for asynchronous processing, you will obtain a JSON response with an ID that will be used later to get the extracted data.

  1. The following variables need to be defined first:
    • filePath (String) – path to a file
    • fileName (String) – name of the file
    • resultJSON (String) – a variable to store the JSON response from the API
    • jsonDes (Newtonsoft.Json.Linq.JObject) – deserialized JSON object
    • documentID (String) – document ID returned from the API
  2. Create a For Each activity that will iterate through each file in filesArray and connect it to the Initialize sequence.
  3. In the Body section, assign filePath and fileName variables the following values, using the Assign activity:
    • filePath = file.ToString
    • fileName = Path.GetFileName(filePath)
  4. Add an If activity with the condition filePath.Contains(".pdf") to upload only PDF files from the directory. The Elis Data Extraction API currently supports PDF files, as well as PNG and JPG image formats. You can use a logical OR to allow uploading of other formats.

To set up the uploading, a sequence of activities needs to be added to the Then branch of the If activity. In this sequence, UiPath will send an HTTP request to the API, deserialize the JSON and store the Elis document ID in the DT_Invoices table, so we can use it later to get the results:

  1. Add a new Sequence to the Then branch.
  2. To create a request, insert an HTTP Request activity in the sequence. Set the value of EndPoint to uploadEndpoint and the request Method to POST. The result should be stored in the resultJSON variable. Then, add an argument named file to the Attachments and set its value to filePath. BodyFormat should be set to multipart/form-data. Finally, we need to add the Authorization key with the value authKey to the headers.

You may want to print some messages to see the progress of the process running in the background. Using the Write Line activity, it is possible to log the intermediate results that will show up in the Output panel under the Trace tab.

  1. After the HTTP request, add a Write Line activity and set text to resultJSON.ToString.
  2. Add a Deserialize JSON activity with resultJSON as an input and jsonDes as an output. TypeArgument must be set to JObject.
  3. Assign documentID the value of jsonDes("id").ToString, using the Assign activity.
  4. Use Add Data Row activity to add {fileName, filePath, documentID} to DT_Invoices. This will create a row with filename, filepath and Elis document ID of each file.
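The upload steps above can be sketched in Python – a hypothetical helper, assuming the third-party `requests` library, that mirrors the HTTP Request activity settings (POST, a `file` attachment, the Authorization header):

```python
def build_upload_headers(auth_key):
    # Elis expects the whole "secret_key YOUR_API_KEY" string in the
    # Authorization header.
    return {"Authorization": auth_key}

def upload_invoice(upload_endpoint, auth_key, file_path):
    # Hypothetical helper mirroring the HTTP Request activity: POST the
    # file as the "file" multipart form field and return the document ID
    # from the JSON response.
    import requests  # third-party HTTP client, assumed installed
    with open(file_path, "rb") as f:
        response = requests.post(
            upload_endpoint,
            headers=build_upload_headers(auth_key),
            files={"file": f},  # multipart/form-data is set automatically
        )
    response.raise_for_status()
    return response.json()["id"]
```

Note that with `requests` you should not set the Content-Type header yourself; the `files` parameter produces the multipart body and boundary automatically.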

Get extracted data

After the uploading is set up, we’ll add a sequence of activities to get processed data for the uploaded invoices, parse JSON and store them in a DataTable. We need to repeat the process for each uploaded file, so we’ll use For Each Row activity to loop through all IDs stored in the DT_Invoices table.

  1. Create the following variables:
    • invoiceName (String) – name of the invoice
    • valuesDict (Dictionary(Of String, String)) – we’ll use this to store the extracted field values, with keys and values representing field names and field values respectively
    • getEndpoint (String) – endpoint to get the extracted results
    • counter (Int32) – counter for logging of API responses
    • fieldName (String) – a variable to store field names
    • fieldValue (String) – a variable to store field values
    • valueToUpdate (String) – this will store the value to update in the dictionary (in case of multi-instance fields)
    • fieldsToWrite (ICollection(Of String)) – collection of field values for each row
    • fieldValueToWrite (String) – a field value to add to the current row
  2. Add a For Each Row activity to the main Flowchart. It should loop through each row in DT_Invoices.
  3. Add two Get Row Item activities to the body of the loop. The first one should store values from the column with index 0 to the invoiceName variable. The second activity stores values of the column with index 2 to documentID.
  4. Using an Assign activity, set the value of valuesDict to new Dictionary(Of String, String).
  5. Create an endpoint for polling the API by adding another Assign activity. The value of getEndpoint should be a string composed of the endpoint we used for upload, the document ID and a query parameter: uploadEndpoint + "/" + documentID + "?filter=best".
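The endpoint assembly in step 5 can be sketched in Python (the base URL below is a placeholder, not the real API address):

```python
def make_get_endpoint(upload_endpoint, document_id):
    # Note the "?" introducing the query string – without it the filter
    # parameter would be glued onto the document ID.
    return upload_endpoint + "/" + document_id + "?filter=best"

print(make_get_endpoint("https://example.invalid/document", "42"))
# https://example.invalid/document/42?filter=best
```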

The filter parameter set to best tells the API that we want only the best candidate for each returned field type. The Elis Extraction API may output multiple instances of the same field type with different scores, but it is recommended to use the output with the best candidates only. For selected field types, such as Recipient Address, all results are returned in both cases, as these are multi-instance fields; their values will be concatenated together in the same cell, separated by a newline character.

Next, we initialize a counter that will be used when printing the intermediate results to the Output console.

  1. Add an Assign activity and set counter to 0.

Now we can set up collecting of the extracted data from the API. We will need a Do While activity, which will iterate through every document and send a request in regular intervals, waiting until the results are ready. To be able to work with the data, we’ll need to deserialize the JSON response and store the field values to the valuesDict dictionary created before.

  1. In the Body of Get extracted data, add a Do While activity with the condition jsonDes("status").ToString = "processing". This will check the document status and wait until the processing is done.
  2. To update the counter, just set counter to counter + 1.
  3. Inside the loop, add another HTTP Request activity. This time, we will use the GET method and set the value of EndPoint to getEndpoint. As before, authKey must be added to Headers.
  4. To print the results to the Output console, add two Write Line activities and set their text values to counter.ToString and resultJSON.ToString.
  5. Add a Deserialize JSON activity with resultJSON as an input and jsonDes as an output, with TypeArgument set to JObject as before.
  6. After deserializing the JSON, add an If activity checking whether the status equals "processing". Then insert a Delay activity with a duration of 00:00:05 into the Then branch, so that the document status is checked every 5 seconds. Feel free to choose a different interval, but it should be at least 500 ms.
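The polling logic above can be simulated in Python; this is a sketch with fake responses rather than the actual workflow code, and the non-"processing" status value is illustrative only:

```python
import time

def poll_until_ready(fetch, delay_seconds=5, max_attempts=60):
    """Call fetch() until the returned document leaves the "processing"
    state. fetch is any callable returning the deserialized JSON
    response as a dict."""
    for _ in range(max_attempts):
        document = fetch()
        if document.get("status") != "processing":
            return document
        time.sleep(delay_seconds)
    raise TimeoutError("document still processing after polling limit")

# Simulated responses: two "processing" replies, then a finished one.
responses = iter([
    {"status": "processing"},
    {"status": "processing"},
    {"status": "ready", "fields": []},
])
result = poll_until_ready(lambda: next(responses), delay_seconds=0)
print(result["status"])  # ready
```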


  1. To parse the JSON, insert a For Each activity after the Do While, iterating through each field in jsonDes("fields").
  2. Add a Flowchart activity to the Body of the loop.

When saving the results to a dictionary, we need to decide what to do with each value based on a few conditions. First, we check whether a field with the given name already exists in the dictionary, which would mean it is a multi-instance field (remember that we set filter to best, so only multi-instance field types should appear more than once). If it doesn’t, we check whether the field name equals "tax_details" – this is a nested group field that logically groups associated values (the tax_details field contains other fields), so a last bit of JSON parsing will be needed.

  1. Add a Flow Decision activity and set the condition to valuesDict.ContainsKey(fieldName).
  2. Insert a Sequence activity and connect it to the output of the True branch of the Flow Decision.
  3. Inside the Sequence, assign valueToUpdate the value of valuesDict(fieldName). Then add a Remove From Dictionary activity, removing the fieldName key from valuesDict, and an Add To Dictionary activity that will insert the fieldName key with the updated value into the dictionary. The updated value is the old value concatenated with the new one: valueToUpdate + "\n" + field("value").ToString.

  1. Going back to the parent Flowchart, add another Flow Decision checking whether fieldName is equal to "tax_details", and connect it to the False branch of the first Flow Decision.
  2. If we are not dealing with tax details, we just store the result, so the False branch should lead to a new Add To Dictionary activity, adding the fieldName key with field("value").ToString to the dictionary.
  3. Connect the True branch to a new For Each activity iterating through the child fields of tax_details. In the Body of the activity, assign fieldName the value of item("name").ToString and fieldValue the value of item("value").ToString. Then add an If activity checking valuesDict.ContainsKey(fieldName). The rest mirrors the multi-instance handling above – values are either concatenated or just added to the dictionary.
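The decision logic above can be sketched in Python; note that the `children` key used for the nested tax_details fields is an assumption for illustration:

```python
def store_field(values, field_name, field_value):
    # Mimics the Flow Decision: a name already present in the dict marks
    # a multi-instance field, so its values are joined with a newline.
    if field_name in values:
        values[field_name] = values[field_name] + "\n" + field_value
    else:
        values[field_name] = field_value

def store_parsed_field(values, field):
    # Nested group fields such as tax_details carry their values in a
    # list of child fields (the "children" key name is an assumption).
    if field["name"] == "tax_details":
        for child in field["children"]:
            store_field(values, child["name"], str(child["value"]))
    else:
        store_field(values, field["name"], str(field["value"]))

values_dict = {}
store_field(values_dict, "recipient_address", "First address")
store_field(values_dict, "recipient_address", "Second address")
print(values_dict["recipient_address"])
# First address
# Second address
```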

In the last For Each activity, we will get the values for the field names we selected in the beginning and store them in the DataTable that can be easily converted to CSV format in the last sequence of the workflow.

  1. To create a row, first set fieldsToWrite to New List(Of String) using an Assign activity, then use an Add To Collection activity to add invoiceName to fieldsToWrite.
  2. Add a For Each activity, looping through each field in fieldsToExtract. This will add the remaining values to the row.
  3. Inside the Body of the loop, add an If activity with the condition valuesDict.ContainsKey(field.ToString). If the condition evaluates to True, assign valuesDict(field.ToString) to fieldValueToWrite. Otherwise, assign a default value, such as "NA".
  4. Add an Add To Collection activity after the If activity, adding fieldValueToWrite to the fieldsToWrite collection. This will create a table row of values for each processed document.

  5. After the For Each loop, add an Add Data Row activity with ArrayRow set to fieldsToWrite.ToArray and DataTable set to DT_Extracted.
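The row assembly can be sketched in Python:

```python
def build_row(invoice_name, fields_to_extract, values_dict, default="NA"):
    # First cell holds the invoice name; fields missing from the
    # dictionary get a default value, mirroring the If activity.
    row = [invoice_name]
    for field in fields_to_extract:
        row.append(values_dict.get(field, default))
    return row

row = build_row("invoice_01.pdf",
                ["invoice_id", "date_due"],
                {"invoice_id": "2019-0042"})
print(row)  # ['invoice_01.pdf', '2019-0042', 'NA']
```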

Save to CSV

The last sequence is simple – it just saves the results to a CSV file and outputs a message box, notifying the user that the process has finished.

  1. Add a Write CSV activity and specify the path to the output file and the DataTable (DT_Extracted) from which to read the data.
  2. Insert a Message Box activity and set the text to "Finished processing", or any other string.
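The Write CSV step corresponds to something like this Python sketch (the file name and sample values are placeholders):

```python
import csv
import os
import tempfile

# A minimal stand-in for the Write CSV activity: write the header row
# followed by the extracted rows.
header = ["filename", "invoice_id", "date_due"]
rows = [["invoice_01.pdf", "2019-0042", "NA"]]

out_path = os.path.join(tempfile.gettempdir(), "extracted.csv")
with open(out_path, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(header)
    writer.writerows(rows)
```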

What next?

In this tutorial, you learned how to use the Data Extraction API to automatically extract data from invoices in a variety of formats, including scanned invoices. This is great if you need to fully automate a process, perhaps at the cost of lower accuracy; maybe you can cross-check invoices against purchase orders and single out mismatching outliers for manual processing, or you are just looking to compute statistics over an invoice corpus. (You can also add manual regex rules to compensate for any mistakes the AI engine could make on invoices you process very frequently, to make the process even more robust.)

But what if you just want users to verify the automatically extracted information directly?

We’ve got you covered, of course! The full-fledged Elis includes a verification interface, which we optimized for the best user experience with a single mission: to speed up human data capture as much as possible. Typically, this interface completely supersedes any invoice data entry form that would otherwise be involved in the process – the set of captured data is fully customizable. This version of Elis can be used without any programming (emails in, CSV downloads out), but is also accessible through our Document Management API and has extremely flexible integration options.

To get a demo, just contact us.


17 thoughts on “Automating data extraction from invoices using Rossum API and UiPath”

  1. Rossum says:

    Hello Indra – thank you for your message. We do not currently support Thai language, but are working on adding additional languages. We currently support English, Czech, German (DE) and Slovak languages.

  2. sandeep kumar says:

    hello, I have specific invoices with different formats, but all contain Invoice Number and Invoice Date. I want to extract these from all invoice PDFs and store them into a CSV – how can I do that? please help

    • Rossum says:

      Hello Sandeep – thank you for your message. Someone from our team will be in touch with you shortly to discuss your use case.

  3. Paul Pu says:

    Hello we’d like to read each item line in the invoice, not only general information of id, date, etc. Is it possible?

    • Rossum says:

      Hello Jie, thank you for your message. Someone from our team will be in touch with you shortly to discuss your use case.

    • Rossum says:

      Hello Sarmad, thank you for your message. Someone from our team will be in touch with you shortly to schedule a demo.

  4. Hello, this article is amazing. I would like to know: if our eform includes a lot of checkboxes and data tables, would you mind providing me more information about how your solution manages it? Thx a lot

    • Rossum says:

      Hello Simon – thank you for your message. Someone from our team will be in touch with you shortly to provide you with more information.

  5. San says:

    I tried to use the UiPath code block and there seem to be some issues related to the UiPath HTTP Request activity (Error – The request was aborted: Could not create SSL/TLS secure channel).

    So I tried to execute the “Example in command line” code copy-pasted from the developers’ home page. I am getting the below error in the command prompt as well.

    400 Bad Request
    Bad Request
    The browser (or proxy) sent a request that this server could not understand.

    Currently I am creating a POC along with UiPath activities, so please help me to solve the issues so that I can use this AI solution as part of my demo.


    • Petr Baudis says:

      Hello San, can you make sure a proxy, e.g. within a corporate network, isn’t involved? The easiest way is to run the same command from within the network and from a different network (e.g. via cellphone tethering). If this resolves the issue, you will need to talk to your network admins to enable internet access for you to Elis servers.

  6. Bruno Dalmazo says:

    Hi Petr, I’m receiving the same error and I tried on 3 different networks.

    400 Bad Request
    Bad Request
    The browser (or proxy) sent a request that this server could not understand.

    I’m trying using JavaScript inside my ERP (NetSuite) and also in Postman; both give this response.

    var api_Key = '***';
    var url = '';
    var postdata = 'file url download';

    var headers = new Array();

    headers['Authorization'] = 'secret_key ' + api_Key;
    headers['Accept'] = 'application/json';
    headers['Content-Type'] = 'multipart/form-data';

    var response = nlapiRequestURL(url, postdata, headers, 'POST');

    • Rossum says:

      Hi Bruno, thanks for getting in touch. Your error appears to in fact have a different cause than San’s – I would wager that it’s because you are not passing the file contents specified as the `file` form parameter in the postdata. But it seems that someone from our support is also already in touch with you – hope you get that issue sorted out quickly!
