OCR (Optical Character Recognition) in Jitter

pcm@pcmxa.com:

Anyone out there worked with OCR in Jitter? I am looking for pointers and was wondering if there were any success stories out there. What I am trying to do is to take frequent photographs, or use video, and scan the image for text, copying any text elements found in the image to a .txt file for later use. Playing around a bit, it looks like cv.jit.blobs.recon and cv.jit.learn could be used to build a simple one, but I doubt it would be robust enough to work in the wild. So I was wondering if anyone had used off-the-shelf OCR in combination with Max (kind of like MacDictate for speech recognition). Suggestions for either PC or Mac would be appreciated.

Thanks

Patrick

arronlee:

I am a newbie here. So I am not very familiar with OCR in Jitter. But I have worked with OCR using .NET on my PC for years. I want to share some information about OCR software with you.
There are two basic types of core OCR algorithm; either may produce a ranked list of candidate characters rather than a single guess.
Matrix matching involves comparing an image to a stored glyph on a pixel-by-pixel basis; it is also known as "pattern matching" or "pattern recognition". It relies on the input glyph being correctly isolated from the rest of the image, and on the stored glyph being in a similar font and at the same scale. This technique works best with typewritten text and does not work well when new fonts are encountered. It is the technique that early physical photocell-based OCR hardware implemented, rather directly.
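As a rough illustration of matrix matching, here is a toy sketch in Python. The glyphs, template names, and 5x5 bitmap size are all invented for the example; a real system would first segment and normalize each glyph to the template size.

```python
# Toy matrix matching: compare a binarized glyph against stored
# templates pixel by pixel and pick the one with the fewest
# mismatched pixels. Templates here are made-up 5x5 bitmaps.

TEMPLATES = {
    "I": (
        (0, 1, 1, 1, 0),
        (0, 0, 1, 0, 0),
        (0, 0, 1, 0, 0),
        (0, 0, 1, 0, 0),
        (0, 1, 1, 1, 0),
    ),
    "L": (
        (0, 1, 0, 0, 0),
        (0, 1, 0, 0, 0),
        (0, 1, 0, 0, 0),
        (0, 1, 0, 0, 0),
        (0, 1, 1, 1, 0),
    ),
}

def match_glyph(glyph):
    """Return (best_char, pixel_distance) by pixel-wise comparison."""
    best_char, best_dist = None, float("inf")
    for char, tmpl in TEMPLATES.items():
        # Count pixels where the glyph disagrees with the template.
        dist = sum(
            g != t
            for grow, trow in zip(glyph, tmpl)
            for g, t in zip(grow, trow)
        )
        if dist < best_dist:
            best_char, best_dist = char, dist
    return best_char, best_dist
```

This also shows why the technique is fragile: a glyph in a new font, or at a different scale, can disagree with the correct template in many pixels, and the match degrades quickly.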
Feature extraction decomposes glyphs into "features" like lines, closed loops, line direction, and line intersections. These are compared with an abstract vector-like representation of a character, which might reduce to one or more glyph prototypes. General techniques of feature detection in computer vision are applicable to this type of OCR, which is commonly seen in "intelligent" handwriting recognition and indeed most modern OCR software. Nearest neighbour classifiers such as the k-nearest neighbors algorithm are used to compare image features with stored glyph features and choose the nearest match.
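A minimal sketch of the feature-based approach, using k-nearest neighbors as described above. The feature vector (loops, endpoints, crossings) and all the training values are invented for illustration; real systems extract many more features from the glyph image.

```python
from collections import Counter
import math

# Toy k-NN glyph classifier: each training glyph is reduced to a
# small feature vector, and an unknown glyph is labelled by majority
# vote among its k nearest neighbors in feature space.

# Feature vectors are (closed_loops, line_endpoints, crossings);
# values below are invented for the example.
TRAINING = [
    ((1, 0, 0), "O"), ((1, 0, 0), "O"),
    ((2, 0, 1), "B"), ((2, 0, 1), "B"),
    ((0, 2, 0), "I"), ((0, 2, 0), "I"),
]

def knn_classify(features, k=3):
    """Label a feature vector by majority vote of the k nearest neighbors."""
    dists = sorted(
        (math.dist(features, f), label) for f, label in TRAINING
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

Because the comparison happens in feature space rather than pixel space, a glyph in an unseen font can still land near the right neighbors, which is why this approach generalizes better than matrix matching.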
Software such as Cuneiform and Tesseract use a two-pass approach to character recognition. The second pass is known as "adaptive recognition" and uses the letter shapes recognized with high confidence on the first pass to better recognize the remaining letters on the second pass. This is advantageous for unusual fonts or low-quality scans where the font is distorted (e.g. blurred or faded).
I hope this information will be helpful to you, and I wish you success. Good luck.
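The two-pass idea can be sketched like this. Everything here is a toy: glyphs are 3-pixel tuples, the generic templates and the 0.9 confidence threshold are invented, and real adaptive recognition is far more sophisticated, but the control flow is the same: confident first-pass results become document-specific templates for the second pass.

```python
# Toy two-pass "adaptive recognition": pass one classifies every
# glyph against generic templates and records a confidence; glyphs
# recognized with high confidence are added to a document-specific
# template set, which pass two uses to re-classify uncertain glyphs.

GENERIC = {"I": (0, 1, 0), "O": (1, 1, 1)}  # made-up 3-pixel "glyphs"

def classify(glyph, templates):
    """Return (char, confidence), confidence = fraction of matching pixels."""
    best, score = None, -1.0
    for char, tmpl in templates.items():
        s = sum(a == b for a, b in zip(glyph, tmpl)) / len(tmpl)
        if s > score:
            best, score = char, s
    return best, score

def two_pass_ocr(glyphs, threshold=0.9):
    adapted = dict(GENERIC)
    results = [None] * len(glyphs)
    uncertain = []
    # Pass 1: keep confident results and learn this document's letterforms.
    for i, g in enumerate(glyphs):
        char, conf = classify(g, GENERIC)
        if conf >= threshold:
            results[i] = char
            adapted[char] = g  # adapt the template to this document's font
        else:
            uncertain.append(i)
    # Pass 2: retry the uncertain glyphs against the adapted templates.
    for i in uncertain:
        results[i], _ = classify(glyphs[i], adapted)
    return "".join(results)
```

For the practical side of Patrick's question, Tesseract is also usable off the shelf: its command line (`tesseract input.png output` writes `output.txt`) could be driven from Max via a shell/exec route on an exported frame, though I have not tried that from Jitter myself.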

Best regards,

Arron