When you think about artificial intelligence and data integration, what's the first thing that comes to mind?
Maybe deep learning neural nets crunching away at big data sets?
Or aggregating data from dozens or hundreds of repositories and streaming it into analytics or business intelligence platforms? Or maybe predictive healthcare and extending human life?
Document processing probably isn't what comes to mind, but that has changed: intelligent document processing platforms now offer a streamlined approach that produces real results. The process isn't easy or for the faint of heart, but the technology has finally arrived.
But why now? What's the difference between long-standing capture tools and this new breed of cognitive document processing?
This form of document processing decreases operating costs, enhances customer and employee satisfaction, and makes it easier to stay compliant with regulations.
For the sake of this article, I'm going to treat a few different categories of content as documents.
Many industries rely on documents for important workflows.
Organizations in nearly every industry are saturated with paper and store millions of archived records, and all of that data is a gold mine.
While it's true that tools have existed for setting up rigid templates that "know" where certain data is on a document, their use is extremely limited.
In the real world, these templates have caused a lot of suffering because of how fragile they are. If a word or number is just outside of where the template is looking, another template must be created to find it. This is hardly scalable.
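To make that fragility concrete, here is a minimal sketch of zonal template extraction, assuming the open-source pytesseract wrapper around Tesseract. The field names and pixel coordinates are hypothetical; they exist only to show why a fixed-position template breaks.

```python
# A minimal sketch of rigid "zonal" template extraction.
# The hard-coded coordinates are hypothetical; they illustrate why
# templates break the moment a field shifts on the page.
from PIL import Image
import pytesseract

# Each field is pinned to exact pixel coordinates (left, top, right, bottom).
TEMPLATE = {
    "invoice_number": (450, 60, 700, 95),
    "total_due": (450, 820, 700, 860),
}

def extract_with_template(image_path: str) -> dict:
    page = Image.open(image_path)
    results = {}
    for field, box in TEMPLATE.items():
        # Crop the fixed zone and OCR only that region.
        zone = page.crop(box)
        results[field] = pytesseract.image_to_string(zone).strip()
    return results

# If a vendor moves "Total Due" even slightly, the crop captures the
# wrong pixels and a brand-new template is required.
```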
Computer vision (CV) is the technology responsible for making scanned documents machine-readable. Non-text artifacts on a document are no problem for a human to read past, but they cause many problems for machines.
Humans understand that a hole punch is not a word, and stamps, lines, barcodes, and images are all just there to support the intent of the document.
But these non-text elements cause big problems for optical character recognition (OCR).
OCR is only as good as the document image it runs on. Modern analytics and business intelligence platforms (and neural nets) all require very accurate (and labeled) data. Traditional OCR's low accuracy doesn't produce acceptable data. This is one of the reasons quality cognitive document processing has been difficult to achieve.
New CV algorithms paired with advanced hardware acceleration enable near-100% OCR accuracy using both new and traditional OCR engines.
And handwriting? New advances in computer vision now enable robust handwriting recognition that streams even more information from documents.
The design philosophy behind this approach is that subject matter experts understand their data better than anyone else. As a result, showing them how the A.I. is operating is both easier and achieves better results than a "dark" machine learning model.
This kind of transparency is based on the belief that a subject matter expert will always be able to make better decisions on data than "hidden" A.I.
As previously mentioned, traditional OCR engines need help for maximum performance. Several key OCR innovations are at the core of AI document processing platforms:
Iterative OCR is a technique that captures text an OCR engine missed on its first pass over a document. As the name suggests, OCR is run multiple times.
The key innovation is that accurately recognized text is automatically removed from the document image before each additional OCR pass. Fewer distractions make the remaining text easier to process.
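Here is a minimal sketch of the idea, assuming pytesseract. The pass count and confidence threshold are illustrative assumptions, not values from any particular platform.

```python
# A minimal sketch of iterative OCR: words recognized with high
# confidence are whited out of the image before the next pass, so
# later passes face fewer distractions.
from PIL import Image, ImageDraw
import pytesseract

def iterative_ocr(image_path: str, passes: int = 3, min_conf: float = 85.0) -> list:
    page = Image.open(image_path).convert("RGB")
    accepted = []
    for _ in range(passes):
        data = pytesseract.image_to_data(page, output_type=pytesseract.Output.DICT)
        draw = ImageDraw.Draw(page)
        for i, word in enumerate(data["text"]):
            if word.strip() and float(data["conf"][i]) >= min_conf:
                accepted.append(word)
                # Erase the confidently recognized word from the image.
                x, y, w, h = (data[k][i] for k in ("left", "top", "width", "height"))
                draw.rectangle([x, y, x + w, y + h], fill="white")
    return accepted
```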
Cellular validation is a technique designed to deal with the challenges caused by text split into columns, arranged in offset patterns, or set in differing font types and sizes.
The key innovation is that the document image is split into appropriate grids to allow the OCR engine to process each section independently.
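A minimal sketch of the grid idea follows, again assuming pytesseract. The fixed 3x2 grid is an illustrative assumption; a real system would size cells to the actual layout.

```python
# A minimal sketch of grid-based ("cellular") OCR: the page is sliced
# into cells so columns and offset blocks are processed in isolation.
from PIL import Image
import pytesseract

def ocr_by_cells(image_path: str, rows: int = 3, cols: int = 2) -> list:
    page = Image.open(image_path)
    width, height = page.size
    cell_w, cell_h = width // cols, height // rows
    texts = []
    for r in range(rows):
        for c in range(cols):
            # Crop one cell and OCR it independently of its neighbors.
            box = (c * cell_w, r * cell_h, (c + 1) * cell_w, (r + 1) * cell_h)
            texts.append(pytesseract.image_to_string(page.crop(box)))
    return texts
```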
Bound region detection enables OCR to focus on just the text within "boxes." Because traditional OCR engines read a document from top to bottom, and left to right, the text in tables is recognized, but out of sequence.
This innovation gives the OCR engine a deep understanding of document structure, and of how text inside a box relates to "normal" text on the page.
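Here is one way to sketch bound-region detection, using OpenCV to find rectangular contours (standing in for table cells and form boxes) and OCR each one in reading order. The size thresholds are assumptions to filter out specks.

```python
# A minimal sketch of bound-region detection with OpenCV + pytesseract.
import cv2
import pytesseract

def ocr_boxed_regions(image_path: str) -> list:
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Invert-threshold so box borders become white foreground.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours]
    texts = []
    # Sort boxes top-to-bottom, then left-to-right, to preserve sequence.
    for x, y, w, h in sorted(boxes, key=lambda b: (b[1], b[0])):
        if w > 50 and h > 20:  # skip specks; thresholds are assumptions
            texts.append(pytesseract.image_to_string(gray[y:y + h, x:x + w]))
    return texts
```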
Layered OCR is a technique designed to process documents with multiple font types, including handwriting. Some types of documents, like checks, have been difficult to process.
Layered OCR is an innovative approach because it is designed to run multiple, specific OCR engines until the desired accuracy is achieved.
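In code, the control flow might look like the sketch below. The engine functions are hypothetical stand-ins (a machine-print engine, a handwriting engine, and so on); each is assumed to return recognized text plus a 0-100 confidence score.

```python
# A minimal sketch of layered OCR: try engines in order until one
# meets the confidence target, keeping the best result as a fallback.
def layered_ocr(image, engines, target_conf: float = 90.0):
    best_text, best_conf = "", 0.0
    for engine in engines:
        text, conf = engine(image)  # each engine returns (text, confidence)
        if conf >= target_conf:
            return text, conf
        if conf > best_conf:
            best_text, best_conf = text, conf
    return best_text, best_conf  # best effort if no engine hits the target

# Hypothetical usage:
#   layered_ocr(img, [printed_text_engine, handwriting_engine])
```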
What happens with OCR results that are less than ideal? OCR synthesis is an innovation that reprocesses OCR results that have a low accuracy confidence score.
Because a confidence rating is assigned to each individual character, groups of characters with low confidence are automatically identified and OCR'd again.
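A minimal sketch of that reprocessing step, assuming pytesseract: words below a confidence threshold are cropped out and re-OCR'd with a different page-segmentation mode ("--psm 7" treats the crop as a single text line). The threshold and padding values are assumptions.

```python
# A minimal sketch of OCR synthesis: re-OCR only the doubtful regions.
from PIL import Image
import pytesseract

def resynthesize_low_confidence(image_path: str, min_conf: float = 60.0) -> list:
    page = Image.open(image_path)
    data = pytesseract.image_to_data(page, output_type=pytesseract.Output.DICT)
    corrected = []
    for i, word in enumerate(data["text"]):
        if word.strip() and float(data["conf"][i]) < min_conf:
            x, y, w, h = (data[k][i] for k in ("left", "top", "width", "height"))
            # Re-run OCR on just this region in single-line mode.
            crop = page.crop((max(x - 2, 0), max(y - 2, 0), x + w + 2, y + h + 2))
            corrected.append(pytesseract.image_to_string(crop, config="--psm 7").strip())
    return corrected
```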
Regular expressions (RegEx) have been used to process text since the 1950s.
Modern data science tools have enabled a new kind of RegEx that allows for less literal character matches. Fuzzy RegEx moves machines closer to true reading by providing a more organic understanding of text.
The way this innovation works is by "fuzzy matching" results against lexicons and external data sources using weighted accuracy thresholds. Machines can now return results that are "close to" what a user is searching for, which is extremely valuable in discovering data.
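You can try this today with the third-party Python `regex` module (`pip install regex`), which extends standard RegEx with approximate matching. The sample text below is an illustrative bit of noisy OCR output.

```python
# Fuzzy matching with the `regex` module: "{e<=2}" allows up to two
# edits (insertions, deletions, or substitutions) in the match.
import regex

text = "lnvoice Nvmber: 48213"  # noisy OCR output (illustrative)

# BESTMATCH prefers the candidate with the fewest edits.
match = regex.search(r"(?:invoice number){e<=2}", text,
                     regex.IGNORECASE | regex.BESTMATCH)
if match:
    print(match.group(), "edits:", sum(match.fuzzy_counts))
```

Even though OCR misread two characters, the pattern still lands on the right phrase, which is exactly the tolerance document data discovery needs.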
Automating document classification is a critical step for accurate data integration.
But if we expect a machine to read and integrate data from documents, it first needs an understanding of the document's intent.
Classification engines use machine learning or rules-based logic to recognize and assign a document type to a page, or a group of pages, in a document. Here are three types of classification techniques (with a simple rules-based sketch after the list):
Natural language processing looks at the text of the whole document to interpret context.
The classification engine uses key words or features that identify a document, like a title, section heading, or any specific data element.
Computer vision analyzes the visual structure of a document without using OCR to determine document type.
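Here is a minimal sketch of the second technique, rules-based classification over OCR'd text. The document types and their signature phrases are illustrative assumptions; an NLP engine would instead learn these signals from labeled examples.

```python
# A minimal sketch of keyword/feature classification: score each
# document type by how many of its signature phrases appear.
RULES = {
    "invoice": ["invoice number", "total due", "remit to"],
    "purchase_order": ["purchase order", "po number", "ship to"],
    "w2": ["wages, tips", "employer identification number"],
}

def classify(page_text: str) -> str:
    text = page_text.lower()
    scores = {doc: sum(p in text for p in phrases) for doc, phrases in RULES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"

print(classify("INVOICE NUMBER: 48213 ... TOTAL DUE: $512.00"))  # -> invoice
```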
Advances in machine learning enable users to train cognitive document processing systems in a visual interface to see exactly how the machine is learning. This makes classifying new document types or troubleshooting problems extremely easy.
And that's just one among many improvements.
Unlocking data trapped in documents is disrupting traditional business through incredible efficiency gains and deep operational insight.
Ready to start your document data integration journey?
This article was updated 12/2/2020.