Invoice-in-a-box – 4 steps to success

Oct 04

Invoices are one of the most in-demand document types to automate. Let’s talk a little about what it takes to be successful in invoice processing. Data capture is the technology used for invoices: you extract the information you want from the invoice field by field. To automate invoices with high accuracy using a boxed invoice solution, you need to do some preparation. Here are 4 MUST-have steps:

1.) Separate your commercial invoices from any specialized invoice types such as legal, manufacturing, telecommunications, etc. The reason you do this is that commercial invoices are the low-hanging fruit of invoice automation; software packages have put the most effort into these documents. By working with them first, you ensure success on a large portion of your invoices and can then tackle the remainder.

2.) Know how many vendors you have. Understanding the makeup of your invoices is very important. Your focus should be determined by the invoices that are easiest to automate and make up the greatest portion of your total volume. So make a list of all your vendors and the percentage of total paper volume each one represents.

3.) Know whether you want to collect line-item data or not. At first glance, the majority of companies say they want line items, only to change their minds later. Find the business process that mandates collecting line items. In your current process, are line items being entered? What database of existing information will you use to support line-item extraction? In the end, most companies choose against line items, or extract them only for a few critical vendors.

4.) Know how you are going to check the quality of extraction. Quality assurance happens with human review and business rules. Know beforehand how you want those to work. For example, a simple business rule could be that all line items must add up to the total amount; if they don’t, you have someone look at the entire invoice.
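
The total-check rule in step 4 can be sketched in a few lines. The function name and tolerance here are illustrative, not from any particular product:

```python
from decimal import Decimal

def needs_review(line_items, invoice_total, tolerance=Decimal("0.01")):
    """Flag an invoice for human review when its line items
    do not add up to the stated total amount."""
    total = sum(Decimal(str(amount)) for amount in line_items)
    return abs(total - Decimal(str(invoice_total))) > tolerance

# An invoice whose line items match the total passes straight through.
print(needs_review(["19.99", "5.00"], "24.99"))   # False: totals match
print(needs_review(["19.99", "5.00"], "25.99"))   # True: route to an operator
```

Using `Decimal` rather than floats avoids false mismatches from binary rounding, which matters when the rule is comparing money.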

These four steps are not the end-all of improving your invoice processing accuracy, but they are necessary, and all are steps to consider before you look at purchasing a boxed invoice processing solution.

Chris Riley – About

Find much more about document technologies at

Set it and forget it OCR

Sep 22

My office is a paper monster. Paper comes in and never leaves intact. The scary part is how fast this happens. Paper in hand, review its contents and assess its value, scan it, shred it, usually within minutes of its existence. The value of set-it-and-forget-it OCR is tremendous, but you have to be comfortable with it.

Set-it-and-forget-it OCR is where you take your OCR product and configure it to automatically process any images that appear in a certain folder. For my office, I scan to an “input” folder and all the resulting compressed and OCR’ed PDF files end up in the “File Cabinet” folder. My strategy will not work for the timid, because I’m relying solely on the power of OCR text and search to retrieve documents when I need them. Most would rather configure their ADF scanner to have a setting or folder for each particular class of documents. Most document scanners today have as few as 9 and as many as 99 destinations you can program. You can set each destination as its own input folder with its own OCR settings and its own output folder.
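
A minimal sketch of this watch-folder idea, assuming a hypothetical `ocr_to_pdf` step standing in for whatever your OCR product actually runs:

```python
import shutil
from pathlib import Path

def ocr_to_pdf(image_path: Path, out_dir: Path) -> Path:
    # Placeholder: a real setup would invoke your OCR product's
    # command line or API to compress and OCR the scan.
    target = out_dir / (image_path.stem + ".pdf")
    shutil.copy(image_path, target)
    return target

def process_new_scans(input_dir: Path, output_dir: Path, seen: set) -> list:
    """One polling pass: hand every not-yet-processed scan to the OCR step.
    Run this in a loop (e.g. every few seconds) for set-and-forget behavior."""
    done = []
    for image in sorted(input_dir.glob("*.tif")):
        if image.name not in seen:
            done.append(ocr_to_pdf(image, output_dir))
            seen.add(image.name)
    return done
```

Each programmed scanner destination would simply get its own `input_dir`, `output_dir`, and OCR settings.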

I know I can do this because I know what settings it takes to get the quality of OCR I need to have at least one or more usable keywords on the document for search. And after all, I’m an expert in OCR, so not using it every day would be crazy in its own right. I’ve yet to be proven wrong: my “File Cabinet” abyss has always given me the information I need at the time I ask for it, and sometimes even new information I did not realize I had.

Now, for you records management folks shaking your head, I understand your complaint. It should not be about my approach, but about what I do with the final paper product. Items that for legal or business reasons are deemed records by your taxonomy should be filed as such, perhaps scanned again as records, and for heaven’s sake, if you are not supposed to, don’t destroy them!

The purpose of my madness is to touch paper as little as possible, and get information only when I need it. I am an extremist, but I assure you there is serious value, and a little fun in the set it and forget it OCR technique.


Squeeze those files

Sep 14

Compression is a great tool for saving hard drive space. You may not currently be thinking about file compression, but you should. It’s very likely that data is being created on your machines at an increasing rate, and your hard-drive space is shrinking at the same fast pace. Organizations and individuals often only consider file compression when there is far too little space left on their hard drives, or when the warning messages about low disk space start appearing. This is a big risk.

As we create files on our computers, access them, move them, and modify them, we are fragmenting the drive. Overly fragmented drives slow down machines and increase the risk of damage and corruption. The more files you have, the more this multiplies. Real-time file compression helps with this because as soon as a file is generated, it’s compressed. Less space is used, and the need to compress in the future is gone. Back-log compression (compressing all your files in bulk) requires a lot of activity on the hard drive and increases fragmentation. The other risk of bulk compression is that you only have one chance to get it right.

Bad compression is not just an irritation, it’s a risk. Usually when you compress a file, you remove the original. The whole purpose is to save space, not use up more by keeping both copies. But because you need to make sure you are compressing the file correctly, keeping both files wastes a lot of space. With day-forward or real-time compression, it’s easy to check files as they come across and make sure everything is good at initial setup; but if you do bulk compression and make a mistake, you could ruin a large library of files.
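
A cautious day-forward compressor can be sketched with Python’s standard `gzip` module. The verify-then-delete order is the point here; the helper name is made up:

```python
import gzip
import os

def compress_and_verify(path: str) -> str:
    """Compress a file, verify the compressed copy decompresses to
    identical bytes, and only then remove the original."""
    with open(path, "rb") as f:
        original = f.read()
    gz_path = path + ".gz"
    with gzip.open(gz_path, "wb") as gz:
        gz.write(original)
    with gzip.open(gz_path, "rb") as gz:
        if gz.read() != original:  # round-trip check before deleting anything
            os.remove(gz_path)
            raise IOError("verification failed; original kept")
    os.remove(path)  # safe: the compressed copy is proven good
    return gz_path
```

Because every file is checked the moment it is compressed, a configuration mistake surfaces on the first file rather than after a whole library has been ruined.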

I firmly believe in file compression, but I know firsthand the risk of doing it incorrectly. I now compress files as they are created and no longer have to think about data piling up faster than I can find ways to save space.


Space Age Optical Character Recognition

Aug 24

There are a lot of technologists out there who believe that optical character recognition’s days are numbered and that it is an aged technology. The belief is that soon paper will go away. This post is for those who believe OCR technology is going away.

The reality is that paper consumption has not really decreased. In some areas paper has been replaced with electronic data interchange (EDI), but in other areas it has actually increased. Studies have also shown that because documents are being scanned more often, there is also an increase in printing when the documents need to be shared or re-purposed. But I’m not here to argue that paper is not going away and that document conversion technologies are required to convert it. I’m here to point out a few futuristic uses of the technology that technologists already like to talk about, all of which involve OCR.

Data Security

The first futuristic use of the technology I would like to discuss is OCR in data security. Text strings sent over the Internet are far easier to sniff and unlock than a compressed JPEG image. What if you were to convert the text into a JPEG during transmission, and the person on the receiving end were to OCR it to get the data? By doing so, the data has been masked in a more efficient and secretive way. For added security, proprietary image formats could be devised.

File Compression

Storing ASCII text takes up far less space than an image or video file. As part of the future of compression technologies, expect that OCR will be used to extract the text from an image and save it as an ASCII file. Viewers will convert the text back to an image during viewing. This removes the image portion of the text and significantly reduces file size.

Robotics

How else do you expect future robots to read text? OCR, of course. The eyes of a robot are essentially a camera that takes pictures rapidly. When the robot is faced with comprehending text, the image will be converted using OCR and fed through an engine to gain meaning from the text and act on it.

So there you have it: three really cool and cutting-edge ways OCR is and will be used in the future. Paper is not going away, but even if it were, just look at the other cool uses of OCR technology.


Even OCR needs a helping hand – Quality Assurance

Aug 04

Let’s face it: OCR is not 100% accurate 100% of the time. Accuracy is highly dependent on document type, quality of scan, and document makeup. Part of the reason OCR is so powerful is that we plan for the cases where it’s not accurate. How do we give OCR the best chance to succeed? There are many ways; what I would like to talk about now is quality assurance.

Quality assurance is usually the final step in any OCR process, where a human reviews uncertainties and business rules are checked against the OCR result. An uncertainty is a character the software flags because it did not satisfy a confidence threshold during recognition. This process is a balancing act between the desire to limit human time as much as possible and the need to see every possible error, but no more.

Start with the review of uncertainties. Here an operator looks at just those characters, words, and sentences that are uncertain, as determined by the OCR product, which will have some indicator of what they are. In full-page OCR, spell checking is often used. In data capture, usually a field is reviewed character by character and you don’t see the rest of the results. Some organizations set critical fields to always be reviewed, no matter the accuracy. Others may decide that a field is useful but does not need to be 100%. Each package has its own variation of “verification mode”. It’s important to know its settings and the levels of uncertainty your documents are showing in order to plan your quality assurance.

After the characters and words have been checked in data capture, there is an additional quality assurance step: business rules. In this process, the software applies arbitrary rules the organization creates and checks them against the fields. A good example might be “don’t enter anyone in the system whose birth year is earlier than 1984”. If such a document is found, it is flagged for an operator to check. These rules can be endless, and packages today make it very easy to create custom rules. The goal is to first deploy the business rules you already have in place in the manual operation, then augment them with rules that enhance accuracy based on the raw OCR results you are seeing.
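
One way such a rules pass could be wired up, with the rule names and field layout purely illustrative:

```python
def apply_business_rules(fields: dict, rules: dict) -> list:
    """Run each named rule against the extracted fields and return
    the names of the rules that failed (i.e. need operator review)."""
    return [name for name, rule in rules.items() if not rule(fields)]

# Example rules, modeled on the ones described above.
rules = {
    "birth_year_minimum": lambda f: f["birth_year"] >= 1984,
    "line_items_match_total": lambda f: abs(sum(f["line_items"]) - f["total"]) < 0.01,
}

doc = {"birth_year": 1975, "line_items": [10.0, 5.5], "total": 15.5}
print(apply_business_rules(doc, rules))  # ['birth_year_minimum']
```

A document with an empty result list passes straight through; any failed rule name routes the document to the human review queue.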

In some more advanced integrations, a database or body of knowledge is deployed as a first round of quality assurance that is still automated.

These two quality assurance steps combined should give any company a chance to achieve the accuracy they are seeking. Companies that fail to recognize or plan for this step are usually the ones that have the biggest challenges using OCR and data capture technology.


There is OCR and then there is Formatting

Jul 26

What is the greatest difference between the most accurate optical character recognition (OCR) products and the least? It might not be what you think. The greatest improvements in OCR in the last 10 years have not been so much in character-level recognition; they have been in how the engines understand the structure of documents. This is called document analysis. Theoretically, if you were to compare two engines with identical character recognition, but engine A had document analysis and engine B did not, engine A would win.

Document analysis is, first, how the engine breaks apart the components of a document such as paragraphs, lines, columns, graphics, etc. Without this, the engine is OCRing blind, and its assumption is that every object it encounters is text. This sometimes leads to clumping of lines, or OCR of graphics. The second aspect of document analysis is delivering formatting in the export that matches the formatting in the document. This can also include font style and color.

With traditional documents, you can expect that products with document analysis will get the formatting spot on. This is very important, not only for editing and re-purposing, but also for keeping a document readable. Another aspect of document analysis is determining reading order. For example, if you have a multi-column, multi-paragraph page, the software has to decide in what order the paragraphs are read. This is useful during recognition, but also in case a formatted document is converted to a flatter file structure, such as a TXT file, where the order stands a chance of being confused.

The reality is that for clean documents, character-level recognition is not getting any better; it’s amazingly accurate today. The opportunity to improve is in document analysis and language morphology, but that is another post.


Replacement for fax right under our noses

Jul 12

How does a technology first invented in 1843 and put into practice in 1924 still exist as a primary function in our working lives? I’m talking about fax. Fax technology is old and outdated. I personally avoid fax simply on principle. But my principle alone will not make big changes in adoption. What people don’t understand is that we have a fax replacement right under our noses, one that is both green and just as easy to use.

The combination of a document scanner, imaging software, and email software is a complete fax replacement solution. Instead of typing in phone numbers, users can type in email addresses. With fax, you double the amount of paper that exists: paper in, paper out. With the document scanning approach, you reduce paper consumption: paper in, email out. Most document scanners today even ship with a pre-configured “Scan to Email” option. On a production level, systems can be set up in offices, your local Kinkos, wherever, to allow multiple users to access the same document scanner and scan to any email address with a basic step-by-step wizard.
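
For a sense of how little software this takes, here is a sketch that builds a “Scan to Email” message with Python’s standard `email` library. The addresses and file names are placeholders; actually delivering the message would go through `smtplib.SMTP.send_message()`:

```python
from email.message import EmailMessage
from pathlib import Path

def scan_to_email(pdf_path: str, sender: str, recipient: str) -> EmailMessage:
    """Build the email a 'Scan to Email' destination would send,
    with the scanned PDF attached."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "Scanned document: " + Path(pdf_path).name
    msg.set_content("Scanned document attached.")
    msg.add_attachment(
        Path(pdf_path).read_bytes(),
        maintype="application",
        subtype="pdf",
        filename=Path(pdf_path).name,
    )
    return msg
```

The scanner's wizard is doing essentially this: collect an address instead of a phone number, wrap the image, and send.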

Not only is scan-to-email saving trees, it is also increasing efficiency, and when combined with workflow, document imaging, OCR, and data capture, it adds much greater value to that single piece of paper.

These systems do in fact exist in small corners of the world, and I have participated in their development and setup. Adoption is still very low. What it comes down to is fear of change. People understand paper to paper. Many users of fax don’t even know what email is. There are two ways this can be solved: time and forced adoption. While I would hope for the second, a campaign to replace all fax machines with scanners, it’s very unlikely and would require unity among multiple competing entities.

No, I do not like fax, but I understand it. And I hope that sooner rather than later people see there has been a solution to replace fax, one that saves trees, increases efficiency, and has existed for many years.


Measuring Document Automation Efficiency

Jun 29

The two most common questions organizations ask when they are seeking document automation technology are “how fast is it?” and “how accurate is it?”. Many don’t realize that the two are in opposition to each other most of the time. The more accurate a system, the slower it is, and the faster it is, the less accurate. But there is one fatal mistake in all these calculations, and that mistake is how efficiency is calculated.

Most companies that trial data capture calculate performance on the slowest step, which is optical character recognition (OCR). Literally, companies will hit the “read” button and start timing until the read is complete. This is what is considered the speed of the document automation system. This is incorrect.

There is no question that OCR can be a tremendous bottleneck in the entire entry process, but poor OCR can create an even greater one. Imagine an OCR engine that reads a document with 100 characters in 1 second, compared to an engine that reads the same 100 characters in 3 seconds. Your initial thought is that the first engine is better, but consider that the first engine may be 60% accurate, leaving 40 characters to be manually entered, while the other engine is 98% accurate, leaving 2 characters to be manually entered or corrected. If you assume an average entry speed of 1.6 characters per second, the 40 characters take an additional 25 seconds to enter, for a total entry time of 26 seconds for the faster engine. For the slower engine, it takes an additional 1.25 seconds to enter or edit the 2 wrong characters, for a total entry time of 4.25 seconds. This means that end to end, the slower engine is about 6 times faster in the document automation process than the faster engine.
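
The arithmetic above can be captured in a small helper, assuming the same 1.6 characters-per-second typing speed:

```python
def total_entry_time(ocr_seconds, chars, accuracy, typing_cps=1.6):
    """End-to-end time: OCR time plus manual entry of the characters
    the engine got wrong, at a given typing speed (chars/second)."""
    wrong = chars * (1 - accuracy)
    return ocr_seconds + wrong / typing_cps

fast_inaccurate = total_entry_time(1, 100, 0.60)  # 1 + 40/1.6 = 26.0 s
slow_accurate = total_entry_time(3, 100, 0.98)    # 3 + 2/1.6  = 4.25 s
print(round(fast_inaccurate / slow_accurate, 1))  # 6.1: the "slow" engine wins
```

Plugging in your own OCR times and observed accuracy rates gives a far more honest comparison than timing the read step alone.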

This simple calculation illustrates the folly of assuming that a slower OCR time makes for a slower overall process. Usually, focusing on accuracy has the greatest benefit for an organization, unless you are improving the speed of a slower engine with hardware, or the two engines are too close to see a benefit.


Capture Products, Data Capture Products, confused?

Jun 16

All technology markets are guilty of coming up with at least one or two confusing terms. In the document imaging world, it’s terms with very similar-sounding names. They are technically similar, but strictly different.

One of the most confusing things in the imaging world is the difference between image capture software, often just called Capture, and data capture software. Not only are the names confusing, but technically there is a lot of overlap: all data capture products have imaging capabilities, and all capture products have basic data capture. The risk of the confusion is substituting one product for the other. For example, organizations that try to use the data capture functionality built into a capture application for a full-blown project end up with little success and a lot of frustration. Let me explain where they fit.

Capture products have the primary function of delivering quality images in a proper document structure. They often feature image clean-up, review, and page-splitting tools that are more advanced than the scanning found in data capture applications. Most demonstrate what is called rubber-band OCR, the reading of a specific coordinate region on a page. Some go as far as creating templates where coordinate zones are saved. This is where the solutions get confused with data capture. Until there is registration of documents and proper forms processing, it is not data capture. The risk of such basic templates is low accuracy and zones that do not always collect data.
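
A zone template is, at heart, nothing more than saved coordinates. This sketch simulates rubber-band extraction over already-recognized text (a real product reads pixels, of course), with field names and coordinates invented for illustration:

```python
# A zone template is just named coordinates; "rubber-band OCR" reads
# whatever falls inside each rectangle.
TEMPLATE = {
    "invoice_number": (0, 20, 32),   # (line, col_start, col_end)
    "invoice_date":   (1, 20, 32),
}

def extract_zones(page_lines, template):
    """Pull each field's text out of its fixed zone on the page."""
    return {
        field: page_lines[row][start:end].strip()
        for field, (row, start, end) in template.items()
    }

page = [
    "Invoice Number:     INV-000451    ",
    "Invoice Date:       2010-06-16    ",
]
print(extract_zones(page, TEMPLATE))  # both fields recovered from their zones
```

The fragility is plain to see: if the vendor shifts a field by a few characters, or a page scans in slightly skewed, the fixed zones grab the wrong text, which is exactly why registration and true forms processing are needed for real data capture.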

Data capture products need images to function, so it was an obvious choice to add scanning to these solutions. They are, however, better fed by a full capture application that has the performance and additional features, such as batch naming, annotations, page splitting, etc., that the organization may require in the resulting image files. In data capture, image capture exists only to get data, so these products sometimes neglect the features that are important for image storage and archival.

In the end, both solutions are improving in the other’s territory. Eventually the lines will blur to the point where, feature-wise, they are identical, and the benefit of one over the other will be rooted in the vendor’s expertise, either capture or data capture. If your primary requirement is quality images, a capture vendor’s solution is the better choice; if it’s data extraction, then a solution rooted in data capture is better.


Already digital but still OCRed

May 19

I’ve faced unique projects in the last four years, and in a few, the best approach seemed to contradict my better logic. The projects I’m talking about are ones where the data we were working with was already in a digital format, namely a PDF file that was created digitally. This meant that all the text in the PDF was available and 100% accurate. So why then, to accomplish the project’s goals, did we use OCR to read the already-digital files as images?

I had intended, for all these projects, to do a logical parsing of the already-digital content to get what I wanted. The problem is that even though the internal structure of a PDF has a logical standard, it’s not used logically 90% of the time by most PDF-generating applications. PDF has a tolerance for mistakes that allows organizations to deviate quite drastically from the standard. This means the content of each PDF is unique not only to the company that generates it, but to the application that created it. Variations on top of variations make logical parsing very difficult. This becomes most obvious when the documents contain tables. Because of this, the only way to text-parse the PDF properly would be to flatten the internal logic so the file consists of nothing but text, but by doing so you lose some of the information pointing to where tables are and their structure.

You may have guessed by now that all my projects were to parse tables from PDFs. Not just any table, but specific tables in PDFs, where each was a unique format. As I said before, my preference would have been to use the 100% accurate data already in the PDF. In the end, I OCRed the PDFs: because they were what is called “pixel perfect”, the accuracy was very high. Now that I was using OCR, I could first recognize an entire document and remove everything that was not a table, as determined by the OCR document analysis. Then I could use keywords to find the specific table I wanted. The end result took me about 3 weeks of work for each project, and it delivered higher accuracy in table finding, and only slightly lower accuracy in the text values, than logical parsing would have.
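
The filter-then-keyword approach can be sketched like this, with the block structure standing in for whatever your OCR engine’s document analysis actually emits:

```python
# Sketch of the approach: document analysis labels each recognized
# block; drop everything that isn't a table, then pick the table whose
# text contains the keyword identifying the one we want.
def find_table(blocks, keyword):
    tables = [b for b in blocks if b["type"] == "table"]
    for table in tables:
        if any(keyword in row for row in table["rows"]):
            return table
    return None

blocks = [
    {"type": "paragraph", "rows": ["Quarterly summary follows."]},
    {"type": "table", "rows": ["Region Revenue", "West 1,200"]},
    {"type": "table", "rows": ["Holdings Shares", "ACME 500"]},
]
print(find_table(blocks, "Holdings")["rows"][1])  # 'ACME 500'
```

Because the document analysis has already decided what is and is not a table, the keyword only has to disambiguate among a handful of candidates rather than the whole page.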

While it seemed most logical to do the parsing, in the end I saved over 5 man-months of work by using OCR.
